
Symantec NetBackup

PureDisk™ Administrator's
Guide

Windows, Linux, and UNIX

Release 6.6.0.2

Publication release 6.6.0.2, revision 1


The software described in this book is furnished under a license agreement and may be used
only in accordance with the terms of the agreement.

Documentation version: 6.6.0.2, revision 1

Legal Notice
Copyright © 2009 Symantec Corporation. All rights reserved.

Symantec, the Symantec Logo, and PureDisk are trademarks or registered trademarks of
Symantec Corporation or its affiliates in the U.S. and other countries. Other names may be
trademarks of their respective owners.

This Symantec product may contain third party software for which Symantec is required
to provide attribution to the third party (“Third Party Programs”). Some of the Third Party
Programs are available under open source or free software licenses. The License Agreement
accompanying the Software does not alter any rights or obligations you may have under
those open source or free software licenses. Please see the Third Party Legal Notice Appendix
to this Documentation or TPIP ReadMe File accompanying this Symantec product for more
information on the Third Party Programs.

The product described in this document is distributed under licenses restricting its use,
copying, distribution, and decompilation/reverse engineering. No part of this document
may be reproduced in any form by any means without prior written authorization of
Symantec Corporation and its licensors, if any.

THE DOCUMENTATION IS PROVIDED "AS IS" AND ALL EXPRESS OR IMPLIED CONDITIONS,
REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT,
ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO
BE LEGALLY INVALID. SYMANTEC CORPORATION SHALL NOT BE LIABLE FOR INCIDENTAL
OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH THE FURNISHING,
PERFORMANCE, OR USE OF THIS DOCUMENTATION. THE INFORMATION CONTAINED
IN THIS DOCUMENTATION IS SUBJECT TO CHANGE WITHOUT NOTICE.

The Licensed Software and Documentation are deemed to be commercial computer software
as defined in FAR 12.212 and subject to restricted rights as defined in FAR Section 52.227-19
"Commercial Computer Software - Restricted Rights" and DFARS 227.7202, "Rights in
Commercial Computer Software or Commercial Computer Software Documentation", as
applicable, and any successor regulations. Any use, modification, reproduction release,
performance, display or disclosure of the Licensed Software and Documentation by the U.S.
Government shall be solely in accordance with the terms of this Agreement.
Symantec Corporation
350 Ellis Street
Mountain View, CA 94043

http://www.symantec.com
Technical Support
Symantec Technical Support maintains support centers globally. Technical
Support’s primary role is to respond to specific queries about product features
and functionality. The Technical Support group also creates content for our online
Knowledge Base. The Technical Support group works collaboratively with the
other functional areas within Symantec to answer your questions in a timely
fashion. For example, the Technical Support group works with Product Engineering
and Symantec Security Response to provide alerting services and virus definition
updates.
Symantec’s maintenance offerings include the following:
■ A range of support options that give you the flexibility to select the right
amount of service for any size organization
■ Telephone and Web-based support that provides rapid response and
up-to-the-minute information
■ Upgrade assurance that delivers automatic software upgrade protection
■ Global support that is available 24 hours a day, 7 days a week
■ Advanced features, including Account Management Services
For information about Symantec’s Maintenance Programs, you can visit our Web
site at the following URL:
www.symantec.com/techsupp/

Contacting Technical Support


Customers with a current maintenance agreement may access Technical Support
information at the following URL:
www.symantec.com/techsupp/
Before contacting Technical Support, make sure you have satisfied the system
requirements that are listed in your product documentation. Also, you should be
at the computer on which the problem occurred, in case it is necessary to replicate
the problem.
When you contact Technical Support, please have the following information
available:
■ Product release level
■ Hardware information
■ Available memory, disk space, and NIC information
■ Operating system
■ Version and patch level
■ Network topology
■ Router, gateway, and IP address information
■ Problem description:
■ Error messages and log files
■ Troubleshooting that was performed before contacting Symantec
■ Recent software configuration changes and network changes

Licensing and registration


If your Symantec product requires registration or a license key, access our technical
support Web page at the following URL:
www.symantec.com/techsupp/

Customer service
Customer service information is available at the following URL:
www.symantec.com/techsupp/
Customer Service is available to assist with the following types of issues:
■ Questions regarding product licensing or serialization
■ Product registration updates, such as address or name changes
■ General product information (features, language availability, local dealers)
■ Latest information about product updates and upgrades
■ Information about upgrade assurance and maintenance contracts
■ Information about the Symantec Buying Programs
■ Advice about Symantec's technical support options
■ Nontechnical presales questions
■ Issues that are related to CD-ROMs or manuals
Maintenance agreement resources
If you want to contact Symantec regarding an existing maintenance agreement,
please contact the maintenance agreement administration team for your region
as follows:

Asia-Pacific and Japan customercare_apac@symantec.com

Europe, Middle-East, and Africa semea@symantec.com

North America and Latin America supportsolutions@symantec.com

Additional enterprise services


Symantec offers a comprehensive set of services that allow you to maximize your
investment in Symantec products and to develop your knowledge, expertise, and
global insight, which enable you to manage your business risks proactively.
Enterprise services that are available include the following:

Symantec Early Warning Solutions These solutions provide early warning of cyber attacks, comprehensive threat
analysis, and countermeasures to prevent attacks before they occur.

Managed Security Services These services remove the burden of managing and monitoring security devices
and events, ensuring rapid response to real threats.

Consulting Services Symantec Consulting Services provide on-site technical expertise from
Symantec and its trusted partners. Symantec Consulting Services offer a variety
of prepackaged and customizable options that include assessment, design,
implementation, monitoring, and management capabilities. Each is focused on
establishing and maintaining the integrity and availability of your IT resources.

Educational Services Educational Services provide a full array of technical training, security
education, security certification, and awareness communication programs.

To access more information about Enterprise services, please visit our Web site
at the following URL:
www.symantec.com
Select your country or language from the site index.
Contents

Technical Support ............................................................................................... 4


Chapter 1 External directory service authentication ...................... 17
About external directory service authentication ................................ 18
Assumptions ......................................................................... 18
User accounts ........................................................................ 19
Obtaining directory service information ........................................... 19
Example Active Directory service .............................................. 21
Example OpenLDAP directory service ........................................ 24
(Optional) Adding PureDisk groups to your directory service ................ 28
(Optional) Verify TLS and copy the CA certificate ............................... 29
Verifying the server name ....................................................... 29
Writing the certificate to the storage pool authority ..................... 30
Linking PureDisk to the external directory service ............................. 33
Configuring communication ..................................................... 34
Managing user groups ............................................................. 39
Enabling the PureDisk system policy that synchronizes PureDisk
with an external directory service ............................................. 40
Completing the General tab ...................................................... 41
Completing the Scheduling tab ................................................. 42
About maintaining synchronization between PureDisk and an external
directory service .................................................................... 42
Adding, changing, or deleting users or groups ................................... 43
Changing the youruserclass, yourloginattrib, or yournameattrib
variables in your directory service’s ldap.xml file ......................... 44
Changing the yourdescriptionattrib variable or the yourmailattrib
variable in your directory service’s ldap.xml file .......................... 44
Disabling external authentication ................................................... 45
Changing the TLS specification ....................................................... 46
Modifying the base search path ...................................................... 46

Chapter 2 Single-port communication .............................................. 49

About single port communication ................................................... 49


Configuring single-port communication ........................................... 49
Configuring your domain name server (DNS) and firewall .............. 51

Adding FQDNs to each service .................................................. 53


Creating a new department with single-port settings .................... 53
Specifying port number 443 as the default port in the
configuration file template ................................................ 56
(Conditional) Configuring port 443 in replication policies .............. 57
Installing agent software on the clients or moving clients .............. 58

Chapter 3 Data replication ................................................................... 59


About data replication .................................................................. 59
About data replication and PureDisk release levels ............................. 60
About data replication policies ....................................................... 61
Creating or editing a data replication policy ...................................... 62
Completing the General tab for a Replication policy ...................... 63
Completing the Data Selections tab for a Replication policy ........... 64
Completing the Scheduling tab for a Replication policy ................. 66
Completing the Parameters tab for a Replication policy ................. 66
Replication jobs ........................................................................... 68
Copying and deleting a replication policy .......................................... 68
Managing replicated data selections ................................................ 68
Viewing replicated data ........................................................... 69
Working with replicated agents and data selections ...................... 69
Copying replicated data to clients on the destination storage
pool ............................................................................... 70
Restoring replicated data back to clients on the source storage
pool ............................................................................... 70
Restoring replicated Oracle data ............................................... 71
Tuning replication ....................................................................... 71

Chapter 4 Exporting data to NetBackup ............................................ 73


About exporting data to NetBackup ................................................. 73
Export limitations .................................................................. 74
Requirements for exporting data to NetBackup ............................ 74
Requirements for restoring data from NetBackup ........................ 75
Enabling and using the NetBackup export engine ......................... 75
Configuring PureDisk and NetBackup for export capability .................. 75
Configuring NetBackup to receive data exported from PureDisk ..................... 79
Configuring PureDisk to export data to NetBackup ....................... 84
Creating or editing an export to NetBackup policy .............................. 85
Completing the General tab for an Export to NetBackup
policy ............................................................................. 87

Completing the Data Selections tab for an Export to NetBackup


policy ............................................................................. 88
Completing the Scheduling tab for an Export to NetBackup
policy ............................................................................. 89
Completing the Parameters tab for an Export to NetBackup
policy ............................................................................. 89
(Optional) Completing the Metadata tab for an Export to
NetBackup policy ............................................................. 89
Running an export to NetBackup policy ............................................ 91
Performing a point-in-time export to NetBackup ................................ 91
Troubleshooting export job failures ................................................. 92
NetBackup export engine log files ............................................. 93
Problems with inactive server agents ......................................... 93
Copying or deleting an export to NetBackup policy ............................. 94
Restoring from NetBackup ............................................................. 94
Restoring to a PureDisk client that is not a NetBackup client ................ 96
Restoring to a PureDisk client that is also a NetBackup client ............... 97

Chapter 5 Disaster recovery backup procedures ............................ 99


About disaster recovery backup procedures ...................................... 99
About performing disaster recovery backups ................................... 100
About backing up your PureDisk environment using NetBackup ......... 101
Prerequisites for NetBackup disaster recovery backups ............... 102
Configuring the NetBackup client software ............................... 102
Enabling NetBackup for PureDisk backups ................................ 104
About NetBackup policy names ............................................... 106
Configuring PureDisk disaster recovery backup policies .................... 106
Completing the General tab for a disaster recovery backup
policy ........................................................................... 108
Completing the Scheduling tab for a disaster recovery backup
policy ........................................................................... 109
Completing the Parameters tab for a disaster recovery backup
policy ........................................................................... 109
About backing up your PureDisk environment using scripts ............... 114
Prerequisites for script-based disaster recovery backups ............. 114
PureDisk’s disaster recovery backup or restore script
examples ...................................................................... 114
Creating a backup script ........................................................ 115
Troubleshooting a disaster recovery backup .................................... 117
Missing pdkeyutil file ............................................................ 117
Content router modes set incorrectly ....................................... 118

Chapter 6 Disaster recovery for unclustered storage


pools ............................................................................... 121
About restoring an unclustered PureDisk environment ..................... 121
When to restore your environment .......................................... 122
Restore overview for an unclustered storage pool ....................... 122
Reinstalling required software (unclustered recovery) ....................... 123
Reinstalling PDOS ................................................................ 124
(Conditional) Reconfiguring the storage partitions on DAS/SAN
disks ............................................................................ 125
(Conditional) Reconfiguring the storage partitions on iSCSI
disks ............................................................................ 129
Completing the software reinstallation ..................................... 132
Performing a disaster recovery of an unclustered PureDisk storage
pool from a NetBackup disaster recovery backup (NetBackup,
unclustered recovery) ............................................................ 133
(Conditional) Cleaning up after a failed full disaster recovery
backup (NetBackup, unclustered recovery) .......................... 134
Using the DR_Restore_all script (NetBackup, unclustered
recovery) ...................................................................... 134
Performing a disaster recovery from a Samba backup (Samba,
unclustered recovery) ............................................................ 138
(Conditional) Recreate your topology information (Samba,
unclustered recovery) ...................................................... 138
(Conditional) Removing corrupted files from an incomplete
backup (Samba, unclustered recovery) ................................ 140
(Conditional) Preparing the storage pool authority node for
disaster recovery (Samba, unclustered recovery) .................. 141
Using the DR_Restore_all script (Samba, unclustered
recovery) ...................................................................... 141
Performing a disaster recovery from a third-party product backup
(third-party, unclustered recovery) .......................................... 147
(Conditional) Recreate your topology information (third-party,
unclustered recovery) ...................................................... 147
(Conditional) Removing corrupted files from an incomplete full
disaster recovery backup (third-party, unclustered
recovery) ...................................................................... 149
(Conditional) Preparing the storage pool authority node for
disaster recovery (third-party, unclustered recovery) ............ 150
Using the DR_Restore_all script (third-party, unclustered
recovery) ...................................................................... 150

Chapter 7 Disaster recovery for clustered storage pools ............. 157


About restoring a clustered PureDisk environment ........................... 157
Recovering from a single-node failover ........................................... 159
Recovering one active node .......................................................... 160
Reinstalling the PDOS software and the VCS software ................. 160
Recreating disks and volumes ................................................. 162
Running the DR_Restore_all script .......................................... 165
Recovering from a data storage corruption ...................................... 166
Recreate the storage partitions that failed ................................. 166
Recreating disks and volumes ................................................. 167
Running the DR_Restore_all script .......................................... 168
Recovering from a complete storage pool disaster (clustered, complete
storage pool disaster) ............................................................ 169
Reinstalling PDOS and disabling the service groups .................... 169
Recreating disks and volumes ................................................. 170
Running the DR_Restore_all script .......................................... 170
(Conditional) Cleaning up after a failed full disaster recovery
backup ............................................................................... 172
Cleaning up after a failed full NetBackup disaster recovery backup
(clustered, complete storage pool disaster) .......................... 172
Cleaning up after a failed full Samba disaster recovery backup
(clustered, complete storage pool disaster) .......................... 173
Cleaning up after a failed full third-party product disaster
recovery backup (clustered, complete storage pool
disaster) ....................................................................... 173
(Conditional) Recreate your topology information ............................ 174
(Conditional) Recreating the topology with current topology
information (Samba or third-party, clustered recovery) ......... 174
(Conditional) Recreating the topology without current topology
information (Samba or third-party, clustered recovery) ......... 174
Running the DR_Restore_all_script to recover the data ..................... 175
Recovering a PureDisk clustered storage pool from a NetBackup
disaster recovery backup ................................................. 176
Recovering a PureDisk clustered storage pool from a Samba
disaster recovery backup ................................................. 183
Recovering a PureDisk clustered storage pool from a third-party
product disaster recovery backup ...................................... 191

Chapter 8 Storage pool authority replication (SPAR) ................... 199


About storage pool authority replication (SPAR) .............................. 199
Disaster recovery strategies ................................................... 201
Activating the local storage pool ................................................... 202

Enabling SPAR backups ............................................................... 204


Completing the General tab for a SPA Replication ...................... 205
Completing the Scheduling tab for a SPA Replication policy ......... 206
Completing the Parameters tab for a SPA Replication
policy ........................................................................... 206
Running a SPAR policy manually .................................................. 206
Restoring from a SPAR backup ..................................................... 207
About the RestoreSPASIO command ........................................ 209
Upgrading PureDisk with SPAR enabled ................................... 211

Chapter 9 Reports ................................................................................. 213


About reports ............................................................................ 213
Permissions and guidelines for running and viewing reports .............. 214
Reports for a running job ............................................................. 215
Examining a running job ........................................................ 215
Restarting a backup job ......................................................... 216
About policies and workflows ....................................................... 216
Types of workflows ............................................................... 216
Workflows in policies ............................................................ 217
Obtaining detailed job reports ....................................................... 218
General tab for a Job Details report .......................................... 219
Details tab for a Job Details report ........................................... 219
Statistics tab for a Job Details report ........................................ 219
Files tab for a Job Details report .............................................. 229
Errors tab for a Job Details report ............................................ 230
Job log tab for a Job Details report ........................................... 230
About Data mining reports ........................................................... 233
Enabling a data mining policy ....................................................... 234
Completing the General tab for a data mining policy ................... 234
Completing the Scheduling tab for a data mining policy ............... 235
Completing the Parameters tab for a data mining policy .............. 235
Running a data mining policy manually .......................................... 236
Obtaining data mining policy output - the data mining report ............. 236
Interpreting the storage pool data reduction factor ..................... 238
Effect of compression on data reduction ................................... 238
Effects of segmentation on data reduction ................................ 238
Obtaining data mining policy output - the Web service report ............. 239
Web service reports .................................................................... 242
Job status Web service reports ................................................ 243
Dashboard Web service reports ............................................... 246
Importing report output into a spreadsheet ............................... 249
About Dashboard reports ............................................................. 249

Displaying the Capacity dashboard .......................................... 250


Displaying the Activity dashboard ........................................... 251
Displaying the Server agent dashboard ..................................... 252
Displaying the Client agent dashboard ..................................... 253
Central storage pool authority reports ............................................ 254
Displaying the Central Reporting dashboard .............................. 254
Updating the Central Reporting dashboard ............................... 259

Chapter 10 Log files and auditing ....................................................... 261


About the log file directory ........................................................... 261
Content router log files .......................................................... 262
Metabase engine log file ........................................................ 265
Workflow engine log file ........................................................ 268
Server agent log files ............................................................. 270
About international characters in log files ................................ 272
Audit trail reporting ................................................................... 273
Setting debugging mode .............................................................. 274
Enabling debugging mode ...................................................... 275
Disabling debugging mode ..................................................... 276
Removing temporary debugging files ....................................... 276

Chapter 11 Storage pool management .............................................. 279


About storage pool management ................................................... 280
About adding services ................................................................. 280
Adding a service to a node ............................................................ 282
Adding a new service on an existing node ................................. 282
Adding a new node and at least one new service on the new
node ............................................................................ 284
Verifying and specifying content router capacity ....................... 285
Adding a new passive node to a cluster ..................................... 286
Activating a new service in the storage pool .................................... 287
Rerouting a content router and managing content routers ................. 288
Planning for a new content router ........................................... 289
Permissions for rerouting ...................................................... 290
Disaster recovery backups and rerouting .................................. 290
Data replication policies and rerouting ..................................... 290
Activating and deactivating content routers .............................. 290
Alleviating content router congestion ...................................... 291
Parallel and serial rerouting examples ...................................... 292
Rerouting the content routers ................................................. 294
Troubleshooting a content router rerouting job .......................... 295
Deactivating a service ................................................................. 296

Preparing to deactivate a content router ................................... 296


Deactivating a content router or NetBackup export engine ........... 298
Managing license keys ................................................................. 299
About central reporting ............................................................... 300
Enabling a storage pool as a central storage pool ........................ 301
Adding a remote storage pool to a central storage pool ................ 301
Disabling central reporting .................................................... 302
Managing storage pools configured in the central storage
pool ............................................................................. 303
Rerouting a metabase engine ........................................................ 304
(Optional) Gathering metabase engine capacity
information ................................................................... 305
Preparing clients for rerouting ................................................ 305
Preparing the old metabase engine for rerouting ........................ 307
Adding the new metabase engine and recording its address .......... 307
Rerouting the agents on the metabase engine ............................ 308
Restarting the agent ............................................................. 309
Verifying a metabase engine rerouting ..................................... 309
Troubleshooting ................................................................... 310
About clustered storage pool administration ................................... 310
Changing the PDOS administrator’s password ................................. 310
Changing the PureDisk internal database and the LDAP administrator
passwords ........................................................................... 311
Increasing the number of client connections ................................... 311
Adjusting the clock on a PureDisk node .......................................... 312
Adjusting the Web UI time-out interval .......................................... 314
Stopping and starting processes on one PureDisk node
(unclustered) ....................................................................... 314
Stopping all services ............................................................. 314
Starting all services .............................................................. 315
Starting all services without rebooting ..................................... 315
Stopping and starting individual services .................................. 315
Stopping and starting processes on one PureDisk node
(clustered) ........................................................................... 317
Stopping and starting processes in a multinode PureDisk storage
pool ................................................................................... 318
Restarting the Java run-time environment ...................................... 319

Chapter 12 Reconfiguring your PureDisk environment .................. 321


About the configuration files ........................................................ 321
Examining configuration settings .................................................. 322
Editing the configuration files with the Web UI ................................ 322

Making a copy of a value set ................................................... 323


Navigating to a value in the configuration file copy .................... 323
Changing a configuration file value or deleting a configuration
file value ....................................................................... 324
Assigning the template and, optionally, pushing the configuration
file changes ................................................................... 325
Editing the configuration files with a text editor .............................. 326
Updating the agent configuration files on a client ............................ 327

Chapter 13 Tuning and optimization .................................................. 331


Tuning backup and restore performance ......................................... 331
Editing an agent configuration file to improve backup and restore
performance .................................................................. 332
Editing an agent configuration file to accommodate large
backups ........................................................................ 333
Multistreamed (parallel) backups ............................................ 334
Multistreamed (parallel) restores ............................................ 335
Segmentation options for backup jobs ...................................... 336
Unexpected results ............................................................... 337
Tuning replication performance .................................................... 337

Appendix A Installing the clustering software .................................. 341


About the Veritas Cluster Server (VCS) software installation .............. 341
(Conditional) Examining the NICs for the private heartbeats .............. 343
Examining the NICs in this node for addressing ......................... 343
(Conditional) Removing addressing from the private heartbeat
NICs ............................................................................. 346
Synchronizing passwords ............................................................ 347
Generating the authentication key on each node ........................ 347
Collect the SSH public keys .................................................... 348
Distributing the key file ......................................................... 349
(Optional) Verifying the SSH access ......................................... 350
Installing the Veritas Cluster Server (VCS) software .......................... 351
Installing VCS 4.1 MP3 .......................................................... 351
Installing VCS 4.1 MP4 and VCS 4.1 MP4RP3 ............................. 356
Configuring VCS ........................................................................ 360
(Conditional) Using YaST to create the storage partitions .................. 365
Starting YaST ...................................................................... 365
Creating the storage partitions ............................................... 367

Appendix B Command Line Interface options for PureDisk ........... 369



Appendix C Third-party legal notices .................................................. 377


Third-party legal notices for Symantec NetBackup PureDisk .............. 377

Index ................................................................................................................... 379


Chapter 1
External directory service authentication
This chapter includes the following topics:

■ About external directory service authentication

■ Obtaining directory service information

■ (Optional) Adding PureDisk groups to your directory service

■ (Optional) Verify TLS and copy the CA certificate

■ Linking PureDisk to the external directory service

■ Enabling the PureDisk system policy that synchronizes PureDisk with an


external directory service

■ About maintaining synchronization between PureDisk and an external directory


service

■ Adding, changing, or deleting users or groups

■ Changing the youruserclass, yourloginattrib, or yournameattrib variables in


your directory service’s ldap.xml file

■ Changing the yourdescriptionattrib variable or the yourmailattrib variable in


your directory service’s ldap.xml file

■ Disabling external authentication

■ Changing the TLS specification

■ Modifying the base search path



About external directory service authentication


By default, PureDisk authenticates users through its internal OpenLDAP directory
service. If you want to use only PureDisk’s internal OpenLDAP directory service
for user authentication, you do not need to perform any additional configuration.
Alternatively, you can configure your site’s external OpenLDAP or Active Directory
service for user authentication. PureDisk requires one of the following directory
service levels:
■ OpenLDAP, version 2.3.27
■ Active Directory, Microsoft Server 2008, Service Pack 1
■ Active Directory, Microsoft Server 2003, Service Pack 1
■ Active Directory, Microsoft Server 2000, Service Pack 4
The following process explains the tasks you need to complete to configure external
directory service authentication.
To configure external directory service authentication
1 Obtain directory service information.
See “Obtaining directory service information” on page 19.
2 (Optional) Add PureDisk groups to your directory service.
See “(Optional) Adding PureDisk groups to your directory service” on page 28.
3 (Optional) Verify transport layer security (TLS) and copy the certificate
authority’s certificate.
See “(Optional) Verify TLS and copy the CA certificate” on page 29.
4 Link PureDisk to the external directory service.
See “Linking PureDisk to the external directory service” on page 33.
5 Enable the PureDisk policy that synchronizes PureDisk with the directory
service.
See “Enabling the PureDisk system policy that synchronizes PureDisk with
an external directory service” on page 40.

Assumptions
The procedures you need to perform to configure external authentication assume
that you are familiar with how your site’s OpenLDAP or Active Directory service
is organized. The procedures also assume that your site’s directory service
administrator can provide you with information about how the directory service
is configured.

User accounts
The following information pertains to user accounts when external authentication
is enabled:
■ The Edit LDAP Server Configuration screen in the Web UI includes a checkbox
labeled Enable LDAP Authentication. When this box is checked, PureDisk
authenticates through an external directory service. When this box is
unchecked, PureDisk authenticates through its internal OpenLDAP directory
service. You cannot merge these directory services.
PureDisk can use either its internal directory service or your external directory
service, but it cannot use both at the same time. When PureDisk is configured
to authenticate through its internal directory service, only its local user
accounts are valid. However, when PureDisk is configured to use an external
directory service, only the accounts from that external directory service are
valid.
■ If the external directory service is down, you can still authenticate through
PureDisk’s internal OpenLDAP service. However, if you try to synchronize with
the external directory service while it is down, the job that runs the system
policy for synchronizing external LDAP users fails.
■ If you want to add PureDisk users and groups, add them in your directory
service and import them into PureDisk. When authentication through an
external directory service is enabled, you cannot create users and groups
directly in PureDisk.
After you import users and groups from the external directory service, you
need to grant PureDisk permissions to those users and groups.
■ You cannot import a user with the root login property from an external
directory service. The root users for both PureDisk and for the external
directory service are always present and are always unique. By default, the
PureDisk root user’s permissions and privileges are always the same. They
remain the same regardless of whether authentication is through PureDisk’s
internal directory service or through an external directory service.

Obtaining directory service information


Your site’s directory service administrator can help you gather the information
that you need to configure PureDisk to perform external user authentication. This
administrator can also help you analyze your site’s existing authentication
configuration.
The following procedure explains the information that you need to obtain from
your site’s directory service administrator.

To obtain information about your site’s directory service


1 Obtain general information from your site’s external directory service
administrator.
The following table summarizes the information that you need to obtain. You
can use the right column of the table to make notes about the requirements.

Item needed                                       Site-specific value

Directory service host fully qualified domain     __________________________________________
name (FQDN) (preferred), host name, or IP
address

Port number of the server that hosts the          __________________________________________
directory service

The ldapsearch(1) command and its parameters      __________________________________________
that PureDisk needs to obtain a directory
listing

Copy of the certificate authority file. The       __________________________________________
directory service administrator needs to put
a copy of this file on the storage pool
authority.

Common name of the certificate authority file     __________________________________________
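
If you want to confirm the base search path yourself, you can query the directory
server’s rootDSE with the ldapsearch(1) command. The following is a minimal sketch
that uses the example Active Directory server address from this chapter
(100.100.100.101) and assumes that the server allows anonymous reads of the
rootDSE; if it does not, add the -D and -W bind options that are shown in the
later examples.

# /usr/bin/ldapsearch -H ldap://100.100.100.101:389 -x -s base -b "" namingContexts

The namingContexts value that the command returns, for example dc=acme,dc=com,
is the base search path for that server.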

2 (Conditional) Obtain TLS information from your site’s external directory


service administrator.
Perform this step if your site requires TLS.
If your site has an Active Directory service, you need to convert the certificate
file you receive to PEM format. Instructions later in this process explain when
and how to perform the conversion.
3 Examine a listing from your directory service.
Each OpenLDAP or Active Directory service is unique. The directory services
themselves, their structures, and their schemas are site-specific and unique
to their purpose. For this reason, you must examine the object classes and
attributes in your directory service and map them to the PureDisk
configuration information screens.
The following examples show directory service listings:
■ See “Example Active Directory service” on page 21.

■ See “Example OpenLDAP directory service” on page 24.

4 Save your directory listing.


You need to use this listing later in the configuration process.
5 Proceed to one of the following topics:
■ If you want to add PureDisk user groups to your directory service at this
time, proceed to the following topic:
See “(Optional) Adding PureDisk groups to your directory service”
on page 28.
■ If TLS is required at your site, proceed to the following topic:
See “(Optional) Verify TLS and copy the CA certificate” on page 29.
■ Otherwise, proceed to the following topic:
See “Linking PureDisk to the external directory service” on page 33.

Example Active Directory service


Table 1-1 shows the structure of an example Active Directory service.

Table 1-1 Active Directory service structure

Domain controller   Domain controller   Organizational units   Common names

dc=com              dc=acme             ou=users               cn=Alice Munro
                                                               cn=Bob Cratchit
                                                               cn=Claire Clairmont
                                                               cn=Dave Bowman

                                        ou=groups              cn=chicago
                                                               cn=atlanta

This directory service has two organizational units: users and groups.
You can use the ldapsearch(1) command to obtain a listing of this directory
service. The command to obtain a listing of users and groups is as follows:

# ldapsearch -H ldap://100.100.100.101:389 -x \
-D "cn=Alice Munro,ou=users,dc=acme,dc=com" -W \
-b dc=acme,dc=com "(objectClass=*)" > /tmp/example.txt

If more directory entries exist in the same directory subtree, a command such as
the preceding example returns information about more than users and groups.
The command writes its output to file example.txt. In the example file that
follows, characters in bold represent definitions from this file that you need later
in the configuration process:

# extended LDIF
#
# LDAPv3
# base <dc=acme,dc=com> with scope subtree # base search path
# filter: (objectClass=*)
# requesting: ALL
#

# Alice Munro, users, acme.com


dn: CN=Alice Munro,OU=users,DC=acme,DC=com
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: user # youruserclass
objectClass: inetOrgPerson
cn: Alice Munro # yournameattrib
sn: Munro
description: Alice's Description # yourdescriptionattrib
givenName: Alice
distinguishedName: CN=Alice Munro,OU=users,DC=acme,DC=com
displayName: Alice
memberOf: CN=chicago,OU=groups,DC=acme,DC=com
uSNChanged: 21751
name: Alice Munro
sAMAccountName: alice.munro # yourloginattrib
userPrincipalName: alice.munro@acme.com
mail: alice.munro@acme.com # yourmailattrib

# Bob Cratchit, users, acme.com


dn: CN=Bob Cratchit,OU=users,DC=acme,DC=com
objectClass: top
objectClass: person
objectClass: organizationalPerson

objectClass: user
objectClass: inetOrgPerson
cn: Bob Cratchit
sn: Cratchit
description: Bob's Description
givenName: Bob
distinguishedName: CN=Bob Cratchit,OU=users,DC=acme,DC=com
displayName: Bob Cratchit
memberOf: CN=chicago,OU=groups,DC=acme,DC=com
name: Bob Cratchit
sAMAccountName: bob.cratchit
userPrincipalName: bob.cratchit@acme.com
mail: bob.cratchit@acme.com

# Claire Clairmont, users, acme.com


dn: CN=Claire Clairmont,OU=users,DC=acme,DC=com
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: user
objectClass: inetOrgPerson
cn: Claire Clairmont
sn: Clairmont
description: Claire's Description
givenName: Claire
distinguishedName: CN=Claire Clairmont,OU=users,DC=acme,DC=com
displayName: Claire Clairmont
memberOf: CN=atlanta,OU=groups,DC=acme,DC=com
name: Claire Clairmont
sAMAccountName: claire.clairmont
userPrincipalName: claire.clairmont@acme.com
mail: claire.clairmont@acme.com

# Dave Bowman, users, acme.com


dn: CN=Dave Bowman,OU=users,DC=acme,DC=com
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: user
objectClass: inetOrgPerson
cn: Dave Bowman
sn: Bowman
description: Dave's Description

givenName: Dave
distinguishedName: CN=Dave Bowman,OU=users,DC=acme,DC=com
displayName: Dave Bowman
memberOf: CN=atlanta,OU=groups,DC=acme,DC=com
name: Dave Bowman
sAMAccountName: dave.bowman
userPrincipalName: dave.bowman@acme.com
mail: dave.bowman@acme.com

# chicago, groups, acme.com


dn: CN=chicago,OU=groups,DC=acme,DC=com
objectClass: top
objectClass: group # yourusergroupclass
sAMAccountName: chicago # yournameattrib
member: CN=Bob Cratchit,OU=users,DC=acme,DC=com # yourmemberattrib
member: CN=Alice Munro,OU=users,DC=acme,DC=com
distinguishedName: CN=chicago,OU=groups,DC=acme,DC=com
name: chicago
sAMAccountName: chicago

# atlanta, groups, acme.com


dn: CN=atlanta,OU=groups,DC=acme,DC=com
objectClass: top
objectClass: group
sAMAccountName: atlanta
member: CN=Dave Bowman,OU=users,DC=acme,DC=com
member: CN=Claire Clairmont,OU=users,DC=acme,DC=com
distinguishedName: CN=atlanta,OU=groups,DC=acme,DC=com
name: atlanta
sAMAccountName: atlanta

# search result
search: 7
result: 0 Success

# numResponses: 7
# numEntries: 6

Example OpenLDAP directory service


Table 1-2 shows the structure of an example OpenLDAP directory service.

Table 1-2 OpenLDAP directory service structure

Domain controller   Domain controller   Organizational units   Common names

dc=com              dc=marlins          ou=commuters           cn=Florence Leeds
                                                               cn=Mary Evans
                                                               cn=Diana Goyer
                                                               cn=Adam Smith
                                                               cn=Eric Meyer
                                                               cn=Joe McKinley

                                        ou=groups              cn=bikers
                                                               cn=drivers

This directory service has two organizational units: commuters and groups.
You can use the ldapsearch(1) command to obtain a listing of this directory
service. The command to obtain a listing of the users and groups is as follows:

# ldapsearch -H ldap://100.100.100.100:389/ -x \
-D "cn=Diana Goyer,ou=commuters,dc=marlins,dc=com" -W \
-b "dc=marlins,dc=com" "(objectClass=*)">/tmp/example.txt

This example writes its output to file example.txt. In the example that follows,
characters in bold represent the definitions that you need later in the configuration
process. The external directory service authentication configuration procedures
use examples from this listing. File example.txt is as follows:

# extended LDIF
#
# LDAPv3
# base <dc=marlins,dc=com> with scope subtree # base search path
# filter: (objectClass=*)

# requesting: ALL
#

# marlins.com
dn: dc=marlins,dc=com
dc: marlins
objectClass: domain

# commuters, marlins.com
dn: ou=commuters,dc=marlins,dc=com
ou: commuters
objectClass: organizationalUnit

# groups, marlins.com
dn: ou=groups,dc=marlins,dc=com
ou: groups
objectClass: organizationalUnit

# Florence Leeds, commuters, marlins.com


dn: cn=Florence Leeds,ou=commuters,dc=marlins,dc=com
mail: Florence.Leeds@marlins.com # yourmailattrib
uid: fleeds # yourloginattrib
objectClass: inetOrgPerson # youruserclass
sn: Leeds
cn: Florence Leeds # yournameattrib
userPassword:: cGFzc3dvcmQ=

# Mary Evans, commuters, marlins.com


dn: cn=Mary Evans,ou=commuters,dc=marlins,dc=com
mail: Mary.Evans@marlins.com
uid: mevans
objectClass: inetOrgPerson
sn: Evans
cn: Mary Evans
userPassword:: cGFzc3dvcmQ=

# Diana Goyer, commuters, marlins.com


dn: cn=Diana Goyer,ou=commuters,dc=marlins,dc=com
mail: Diana.Goyer@marlins.com
uid: dgoyer
objectClass: inetOrgPerson
sn: Goyer
cn: Diana Goyer

userPassword:: cGFzc3dvcmQ=

# Adam Smith, commuters, marlins.com


dn: cn=Adam Smith,ou=commuters,dc=marlins,dc=com
mail: Adam.Smith@marlins.com
uid: asmith
objectClass: inetOrgPerson
sn: Smith
cn: Adam Smith
userPassword:: cGFzc3dvcmQ=

# Eric Meyer, commuters, marlins.com


dn: cn=Eric Meyer,ou=commuters,dc=marlins,dc=com
mail: Eric.Meyer@marlins.com
uid: emeyer
objectClass: inetOrgPerson
sn: Meyer
cn: Eric Meyer
userPassword:: cGFzc3dvcmQ=

# Joe McKinley, commuters, marlins.com


dn: cn=Joe McKinley,ou=commuters,dc=marlins,dc=com
mail: Joe.McKinley@marlins.com
uid: jmckinley
objectClass: inetOrgPerson
sn: McKinley
cn: Joe McKinley
userPassword:: cGFzc3dvcmQ=

# bikers, groups, marlins.com


dn: cn=bikers,ou=groups,dc=marlins,dc=com
objectClass: groupOfNames # yourusergroupclass
cn: bikers # yournameattrib
member: cn=Florence Leeds,ou=commuters,dc=marlins,dc=com # yourmemberattrib
member: cn=Mary Evans,ou=commuters,dc=marlins,dc=com
member: cn=Diana Goyer,ou=commuters,dc=marlins,dc=com

# drivers, groups, marlins.com


dn: cn=drivers,ou=groups,dc=marlins,dc=com
objectClass: groupOfNames
cn: drivers
member: cn=Adam Smith,ou=commuters,dc=marlins,dc=com
member: cn=Eric Meyer,ou=commuters,dc=marlins,dc=com

member: cn=Joe McKinley,ou=commuters,dc=marlins,dc=com

# search result
search: 2
result: 0 Success

# numResponses: 12
# numEntries: 11

(Optional) Adding PureDisk groups to your directory service

Perform this procedure if you want to create PureDisk user groups at this time.
You can perform this procedure either now or after you complete the external
directory service authentication configuration. In this procedure, you create users
and groups that are specific to PureDisk operations.
You can configure PureDisk to authenticate some or all of the user groups that
are defined in your directory service. After you add, delete, or change user or
group information in your directory service, run the PureDisk system policy. This
policy synchronizes external directory service users with PureDisk’s internal
directory service. You can run the synchronization policy at any time after your
initial configuration. If you configure users and groups specific to PureDisk now,
you can see them in the Web UI immediately after you complete the configuration.
To create PureDisk users and groups
1 Edit your directory service and add one or more of the following typical
PureDisk user groups:
■ Administrators. Users with full administrative privileges.
■ Users. Users who can back up and restore data.
■ Reporters. Users who can run reports.
■ Installers. Users who can install PureDisk agents on client systems.
■ Backup operators. Users who can back up files for one storage pool but
cannot restore or delete.
The following manual includes information about the different types of
permissions that you can grant to users and user groups:
See the PureDisk Client Installation Guide.
An example LDIF sketch for adding one of these groups to an OpenLDAP
directory service appears after this procedure.
2 Proceed to one of the following topics:
■ If TLS is required at your site, proceed to the following topic:

See “(Optional) Verify TLS and copy the CA certificate” on page 29.
■ Otherwise, proceed to the following topic:
See “Linking PureDisk to the external directory service” on page 33.
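
The following is a minimal LDIF sketch of how one of the groups from step 1 might
be defined in an OpenLDAP directory service that is laid out like the marlins.com
example earlier in this chapter. The group name puredisk-admins, the file name
/tmp/puredisk-groups.ldif, and the bind DN cn=admin,dc=marlins,dc=com are
hypothetical values for illustration; substitute the object classes, attributes,
and administrator credentials that your site’s directory service actually uses.

# puredisk-admins group entry (hypothetical name; adjust to your site's schema)
dn: cn=puredisk-admins,ou=groups,dc=marlins,dc=com
objectClass: groupOfNames
cn: puredisk-admins
member: cn=Diana Goyer,ou=commuters,dc=marlins,dc=com

You might load an LDIF file such as this one with the ldapadd(1) command, for
example:

# /usr/bin/ldapadd -H ldap://100.100.100.100:389 -x \
-D "cn=admin,dc=marlins,dc=com" -W -f /tmp/puredisk-groups.ldif

On an Active Directory service, you would typically create the equivalent group
with the Active Directory administration tools instead.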

(Optional) Verify TLS and copy the CA certificate


You can use TLS during the user authentication process, but PureDisk does not
require TLS. If TLS is required, perform the following procedures. The directory
service needs to authenticate to the storage pool authority. PureDisk does not
support mutual authentication for situations in which the Active Directory server
and the storage pool authority need to authenticate to each other.
To verify TLS and copy the CA certificate
1 Verify the server name.
See “Verifying the server name” on page 29.
2 Write the certificate to the storage pool authority.
See “Writing the certificate to the storage pool authority” on page 30.
3 Proceed to the following topic:
See “Linking PureDisk to the external directory service” on page 33.

Verifying the server name


The following procedure explains how to verify the server name in the TLS
certificate. When you use TLS, specify the common name whenever a procedure
requires you to provide the LDAP Server Host Name. Unresolvable common
names are the most common cause of server certificate errors.
To verify the server name in the certificate
1 Obtain the certificate’s common name from your directory service
administrator.
2 Log into the storage pool authority node (unclustered) or storage pool
authority service (clustered) as root.

3 Verify the common name by issuing a ping(8) command against the


certificate’s common name.
For example:

# ping mn.north.stars
PING mn.north.stars (100.100.100.101) 56(84) bytes of data.
64 bytes from mn.north.stars (100.100.100.101): icmp_seq=1 ttl=64 time=4.71 ms
64 bytes from mn.north.stars (100.100.100.101): icmp_seq=2 ttl=64 time=0.353 ms

4 (Conditional) Edit the /etc/hosts file.


Perform this step only if the previous step’s ping(8) command was
unsuccessful.
Add a line in the following format to the end of the /etc/hosts file:

ip_addr_of_external_directory_services common_name

For example, if the common name of your directory service certificate is


mn.north.stars, the line should look like the following:

100.100.100.101 mn.north.stars

5 Proceed to the following:


See “Writing the certificate to the storage pool authority” on page 30.

Writing the certificate to the storage pool authority


The following procedure explains how to write the certificate to the storage pool
authority node and verify the server-side authentication.
To write the certificate to the storage pool authority
1 Log into the storage pool authority node (unclustered) or storage pool
authority service (clustered) as root.
2 Copy one of the following certificate files to the appropriate directory on the
storage pool authority:
■ The cacert.pem file from an OpenLDAP server. cacert.pem is the default
name for this file when it is generated by an OpenLDAP server. This file
name can be different at your site.
■ The generated certificate from an Active Directory server.
The appropriate directory for the certificate file is as follows:
■ In an unclustered storage pool, Symantec recommends that you write the
certificate file to the /var/ldap/certstore/ directory, but you can write
this file to any directory.
■ In a clustered storage pool, Symantec recommends that you log into the
storage pool authority and write the certificate file to /Storage/var/keys.
When you write the certificate file to /Storage, you ensure that the
certificate moves with the storage pool authority when a failover occurs.
You can use any file transfer program to copy the files.
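For example, assuming the directory service runs on a host that is reachable as
mn.north.stars and stores its CA certificate in /etc/openldap/cacert.pem (both
values are placeholders for your site), an scp(1) copy to an unclustered storage
pool authority might look like the following:

# scp root@mn.north.stars:/etc/openldap/cacert.pem /var/ldap/certstore/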
3 (Conditional) Use the openssl(1) command and the x509(1) utility to convert
the Active Directory certificate to PEM format.
Perform this step only if an Active Directory service generated the certificate.
You do not have to perform this step if an OpenLDAP directory service
generated the certificate.
The openssl(1) command and the x509(1) utility make the file compatible
with OpenSSL.
Type the following command:

# /usr/bin/openssl x509 -inform DER -outform PEM -in file.cer -out file.pem

This command’s variables are as follows:

file.cer The name of the file that contains the Active Directory
certificate. This file name ends in .cer. Obtain this file from
your site’s directory service administrator.

file.pem The name of your certificate in the format that is compatible
with OpenSSL. This file ends in .pem.
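If you want to confirm that the conversion succeeded and check the certificate’s
common name and validity dates, you can inspect the PEM file with the x509(1)
utility. This check is optional and is not part of the documented procedure:

# /usr/bin/openssl x509 -in file.pem -noout -subject -dates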

4 Use the openssl(1) command and the s_client(1) program to test the port
connections and to verify that the SSL certificate operates correctly.
Type the following command:

# /usr/bin/openssl s_client -connect FQDN:port -showcerts -state -CAfile cert_loc

This command’s variables are as follows:


FQDN:port The directory service server’s FQDN and the port from which
you imported the certificate. This variable takes the format
FQDN:port. Specify the following values:

■ For FQDN, specify the FQDN of the directory service server.
■ For port, specify the port that the directory service server
uses for incoming communication. By default, the value is
636.

For example, blink.acme.com:636.

cert_loc Specify the absolute path to the certificate file. This file is the
one that you copied in step 2.

For example: /var/ldap/certstore/mycertfile.pem.
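Putting the example values together (the FQDN blink.acme.com, port 636, and
the certificate file /var/ldap/certstore/mycertfile.pem), the complete command
might look like the following:

# /usr/bin/openssl s_client -connect blink.acme.com:636 -showcerts -state -CAfile /var/ldap/certstore/mycertfile.pem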


5 Use the ldapsearch(1) command to test the connection between the storage
pool authority and the directory service server.
The connection needs to be open to allow continued authentication activities.
The command has the following format:

# /usr/bin/ldapsearch -H ldaps://ds_serv:port -x -D "uid" -W -b base "(filter)"

For example:

# /usr/bin/ldapsearch -H ldaps://100.100.100.101:636 -x \
-D "cn=Alice Munro,ou=users,dc=acme,dc=com" -W \
-b ou=groups,dc=acme,dc=com "(objectClass=group)"

The command’s variables are as follows:

ds_serv The FQDN, host name, or IP address of the directory service
server. Symantec recommends that you specify an FQDN.

port The port that PureDisk uses for TLS communication. This
port is the one where the external OpenLDAP server runs
ldaps. By default, the value is 636.

uid The distinguished name of the test user with which to bind.

base The base search path.

filter An object class name. The command searches for this object
class name as a test.

If the connection is open, the command displays a listing of directory service
server information. If the command fails or generates error messages, fix the
connectivity problems before you proceed.
6 Proceed to the following topic:
See “Linking PureDisk to the external directory service” on page 33.

Linking PureDisk to the external directory service


Use the following procedures to link PureDisk and your external Active Directory
or OpenLDAP directory service.
To link PureDisk to the external directory service


1 Configure communication in the PureDisk Web UI.
See “Configuring communication” on page 34.
2 Manage user groups.
See “Managing user groups” on page 39.

Configuring communication
The following procedure explains how to configure communication in the PureDisk
Web UI.
To configure communication in the PureDisk Web UI
1 Display the storage pool authority opening screen.
Open a browser window and type the following URL:

https://URL

For URL, specify the URL to the storage pool authority. For example, in an
all-in-one environment, this value is the URL of the PureDisk node upon
which you installed all the PureDisk software. For example,
https://acme.mnbe.com.

2 Type your user name and password at the prompts on the login screen.
3 Click Settings > Configuration.
4 In the left pane, click the plus (+) sign to the left of LDAP server.
5 Select External LDAP.
The LDAP Server Configuration properties appear in the right pane.
6 Complete the Connection tab.
See “Completing the Connection tab” on page 34.
7 Complete the Mapping tab.
See “Completing the Mapping tab” on page 37.
8 Enable user group management.
See “Managing user groups” on page 39.

Completing the Connection tab


The following procedure explains how to complete the fields on the Connection
tab.
To complete the Connection tab


1 (Optional) Check the Enable LDAP Authentication box.
Perform this step if you want to enable an external OpenLDAP service or an
external Active Directory service. A check in this box determines whether
PureDisk uses the default PureDisk OpenLDAP directory service or whether
PureDisk uses an external directory service, as follows:
■ If you want to enable PureDisk's internal OpenLDAP directory service,
clear this box. By default, this box is clear.
For example, if the external directory service goes down, clear this box.
When this box is clear, PureDisk uses its internal authentication
mechanism, and users can continue to use PureDisk while the external
directory service is down.
■ If you want to enable an external OpenLDAP service or an external Active
Directory service, check this box.

2 Specify the LDAP Server Host Name.


The content of this field differs depending on whether or not you enabled
transport layer security (TLS), as follows:
■ When TLS is enabled, specify the FQDN of the external OpenLDAP server.
Note that this FQDN must match the common name that you specified
when you created the certificate of your external OpenLDAP server. If
you do not have this information yet, obtain this name from your directory
service administrator.
■ When TLS is not enabled, specify the FQDN or IP address of the server
upon which your external OpenLDAP or Active Directory service resides.
If you want to enable single port communication between the nodes and
clients in your PureDisk environment, Symantec recommends that you
use an FQDN for this field. For example: blinkie.acme.com.

3 Verify, and respecify if needed, the Port number that connects PureDisk to
the external OpenLDAP or Active Directory service.
The port to specify depends on whether TLS is enabled, as follows:
■ When TLS is enabled, the default security port is 636.
■ When TLS is not enabled, the default port is 389.

4 Check or clear the Enable TLS for LDAP Communication box.


A check in this box determines whether TLS is enabled. TLS encrypts data
transactions between your external directory service and PureDisk.
5 (Conditional) Specify the CA Certificate Location.


Perform this step if you checked Enable TLS for LDAP Communication.
Specify the full path to the file that contains the certificate authority that
PureDisk can use to verify the connection to the external directory service
server. This is the file you copied over from the external directory service
server to the storage pool authority, and typically, this file resides in
/var/ldap/certstore/. Specify the location you configured in the following
procedure:
See “(Optional) Verify TLS and copy the CA certificate” on page 29.
This file must reside on the storage pool authority in PEM format.
6 Specify the User Distinguished Name.
The distinguished name of an external directory service administrator
account. This administrator account does not need to have full privileges on
the directory service server. PureDisk requires only the search privilege.
Contact your site’s directory service administrator to obtain this value.
7 In the Password field, specify the password for the User Distinguished Name's
account.
Contact your site's directory service administrator to obtain this password.
8 In the Service Type pull-down menu, select either Active Directory or
OpenLDAP.
9 Specify the external directory service's Base Search Path.
This path specifies a directory in the external directory service beneath which
PureDisk can find the entries that define all user
groups. Type the distinguished name into this field. Specify the highest entry
that is a common ancestor of the groups that you need.
Note that you specify the individual groups that define the users that you want
to authenticate as PureDisk users in a later procedure. This later procedure is
as follows:
See “To manage user groups” on page 40.
For example, assume that you want PureDisk to use the groups from the
OpenLDAP marlins.com directory service. To specify all groups, examine the
ldapsearch(1) command output and complete the following steps:

■ Type the following into the box under the Base Search path label:

dc=marlins,dc=com

■ Click Add.
You can specify only one search path.


10 Proceed to the following:
See “Completing the Mapping tab” on page 37.

Completing the Mapping tab


The following procedure explains how to complete the fields on the Mapping tab.
The examples in this procedure refer to the values in examples.txt described in
the following:
See “Example OpenLDAP directory service” on page 24.
To complete the Mapping tab
1 Click the Mapping tab.
2 Gather the directory service listing that you obtained when you completed
the following procedure:
See “Obtaining directory service information” on page 19.
If you have not yet obtained a listing, generate one now. Keep this output
available to you. You might need the information in the listing to complete
the Mapping tab.
For example, for a small configuration, obtain a copy of the ldapsearch(1)
output. For a large configuration, you can use a directory browser such as
the one at the following location:
http://www.anl.gov/techtransfer/Software_Shop/LDAP/LDAP.html
Symantec Corporation does not endorse, guarantee, or recommend any
particular LDAP browser.
3 Complete the fields under the User attributes heading.


The following table explains how to complete these fields.

User object class Specify the attribute that defines the object class for
users on your external directory service.

Active Directory example: user.

OpenLDAP example: inetOrgPerson.

Login attribute Specify the attribute that uniquely identifies a user. The
user ID for each user is unique.

Active Directory example: sAMAccountName.

OpenLDAP example: uid.

Name attribute Specify the attribute that defines a user’s real name. This
attribute is not the login ID of a user.

Active Directory example: cn.

OpenLDAP example: cn.

Description attribute (Optional) Specify the attribute for the descriptive
field. Some directory services do not have this attribute.

Active Directory example: description.

OpenLDAP example: description.

Email address attribute (Optional) Specify the attribute that describes the
user’s email address.

Active Directory example: mail.

OpenLDAP example: mail.
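As an illustration only, an OpenLDAP user entry that uses the example attributes
might appear as follows in ldapsearch(1) output. The entry, its distinguished
name, and its values are hypothetical:

dn: uid=amunro,ou=users,dc=marlins,dc=com
objectClass: inetOrgPerson
uid: amunro
cn: Alice Munro
sn: Munro
description: Backup operator
mail: amunro@marlins.com

In this entry, inetOrgPerson is the user object class, uid is the login attribute,
cn is the name attribute, and description and mail supply the optional attributes.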


4 Complete the fields under the Group attributes heading.


The following table explains how to complete these fields.

Group object class Specify the attribute that defines the object class for
groups on your external directory service.

Active Directory example: group.

OpenLDAP example: groupOfNames.

Membership attribute Specify the attribute that defines each member in the
group.

Active Directory example: member.

OpenLDAP example: member.

Name attribute Specify the attribute that defines a user group.

Active Directory example: sAMAccountName.

OpenLDAP example: cn.
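Similarly, a hypothetical OpenLDAP group entry that uses these attributes might
appear as follows in ldapsearch(1) output:

dn: cn=bikers,ou=groups,dc=marlins,dc=com
objectClass: groupOfNames
cn: bikers
member: uid=amunro,ou=users,dc=marlins,dc=com

Here, groupOfNames is the group object class, member is the membership
attribute, and cn is the group name attribute.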

5 Click Save.
6 (Conditional) On the storage pool authority, edit the /etc/hosts file and add
a line that allows the storage pool authority service to resolve the directory
server using the name of this certificate.
Perform this step if you enabled TLS.
For example, assume that the common name of the certificate file is
blinkie.acme.com. Note that this string is not the FQDN of the server upon
which the directory resides. Add the following entry:
100.100.100.101 blinkie.acme.com

7 Proceed to the following:


See “Managing user groups” on page 39.

Managing user groups


The following procedure explains how to manage user groups.
To manage user groups


1 Click Settings > Configuration.
2 In the left pane, click the plus (+) sign to the left of LDAP Server.
3 Select External LDAP.
4 In the right pane, click Manage User Groups.
5 Complete the Manage User Groups pane.
In this screen, complete the following steps:
■ In the left-most box, under the Distinguished Name label, type the
distinguished name of a user group into this field. For example:
CN=atlanta,OU=groups,DC=acme,DC=com or
cn=bikers,ou=groups,dc=marlins,dc=com.

■ Click Add.
■ Repeat the preceding steps to add distinguished names for all the user
groups that PureDisk needs to authenticate.
■ Click Save.
If the external directory service is down at the time you click Save,
PureDisk generates the following message:

Error: Invalid Search Path

6 Proceed to the following:


See “Enabling the PureDisk system policy that synchronizes PureDisk with
an external directory service” on page 40.

Enabling the PureDisk system policy that synchronizes PureDisk with an external directory service
PureDisk lets you create several different policies for backups, data removal, and
many other purposes. However, for other tasks you only need one generic policy.
PureDisk includes several system policies for these generic tasks. In most cases,
you need to enable a system policy before PureDisk can run the policy. The
procedure in this topic describes how to enable the system policy that can
synchronize the directory services. This synchronization ensures that PureDisk
recognizes the users and the groups that you add to the external directory service.
During the synchronization, PureDisk does not synchronize user passwords. The
passwords reside only in the external directory service files.
The PureDisk Web UI does not accept empty (blank) passwords. Make sure that all
users you want to authenticate through an external directory service have a
nonblank password. A user with a blank password cannot log into the PureDisk
Web UI.
To enable the system policy for synchronization
1 Click Manage > Policies.
2 In the left pane, under Miscellaneous Workflows, click the plus sign (+) next
to External LDAP server synchronization.
3 Select System policy for Syncing external LDAP users.
4 Complete the General tab.

Note: The General tab and the Scheduling tab include a Save option on each
tab. Do not click Save until you complete the fields on each tab. If you click
Save before you complete each tab, PureDisk saves the specifications you made
up to that point and closes the dialog box. To complete more fields, open the
dialog box again in edit mode.

See “Completing the General tab” on page 41.


5 Complete the Scheduling tab.
See “Completing the Scheduling tab” on page 42.

Completing the General tab


This tab specifies the general characteristics for this policy.
To complete the General tab
1 (Optional) Type a new name for this policy in the Name field.
You do not need to rename this policy. The default name is System policy
for Syncing external LDAP users.
2 Select Enabled or Disabled.
This setting lets you control whether or not PureDisk runs the policy according
to the schedule you specify in the Scheduling tab.
■ If you select Enabled, PureDisk runs the policy according to the schedule.
■ If you select Disabled, PureDisk does not run the policy according to the
schedule. Disabled is the default.
For example, you might want to stop running this policy during a system
maintenance period. If you select Disabled, you do not need to enter
information in the Scheduling tab to suspend and then re-enable this
policy.

3 Specify an escalation action.


PureDisk can notify you if the policy does not complete within a specified
time. For example, you can configure PureDisk to send an email message to
an individual if the policy workflow does not complete in an hour.
Select times in the Escalate warning after or the Escalate error and terminate
after drop-down boxes to specify the elapsed time. If the policy workflow
does not complete before the warning timeout, PureDisk generates a warning
event. If the policy workflow does not complete before the error timeout,
PureDisk generates an error event.
If you select either option, you must create a policy and an event escalation
action. These escalation actions define the email message, define its recipients,
and associate the escalation action with the policy.
For information about policy and event escalations, see the following manual:
See the PureDisk Backup Operator’s Guide.
4 Proceed to the following:
See “Completing the Scheduling tab” on page 42.

Completing the Scheduling tab


From this tab, use the drop-down lists and check boxes to specify when the policy
is to run.
To specify the schedule
1 Accept the system defaults or use the drop-down lists to specify when the
policy is to run.
2 Click Save.

About maintaining synchronization between PureDisk and an external directory service
Over time you can change users or groups in your external directory service. As
users and groups change, run the system policy that synchronizes the external
directory service with PureDisk. For information about this policy, see the
following:
See “Enabling the PureDisk system policy that synchronizes PureDisk with an
external directory service” on page 40.
The following topics describe other changes you might need to make to your
authentication configuration:
See “Adding, changing, or deleting users or groups” on page 43.
See “Changing the youruserclass, yourloginattrib, or yournameattrib variables
in your directory service’s ldap.xml file” on page 44.

Adding, changing, or deleting users or groups


If you need to add, change, or delete users and groups in your external directory
service, use the procedures that your directory service provides. After you make
the changes, however, ensure that you edit the appropriate group information in
the PureDisk Web UI. Then run the system policy for synchronizing external
directory service users.
For example, assume that you remove a user group from an Active Directory
service but fail to remove that group’s distinguished name from the PureDisk
user group configuration. PureDisk can generate messages such as the following
in the job log file for the synchronization job:

Start to load group cn=redfox,cn=users,dc=gerardtest,dc=local from EXTERNAL LDAP
*** Error Message ***
0
severity: 6
server: 1000000
source: SPA-CLI_Component
description:
[2]ldap_read(): Search: No such object
*** End ***

You can use the same procedure to add or delete user groups. The procedure is as
follows:
See “Managing user groups” on page 39.
Changing the youruserclass, yourloginattrib, or yournameattrib variables in your directory service’s ldap.xml file
If you change the youruserclass, yourloginattrib, or yournameattrib variables
in your directory service, you need to update PureDisk’s /etc/puredisk/ldap.xml
file.
To change the youruserclass, yourloginattrib, or yournameattrib variables
1 Use your directory service’s methods to change the object class names in
your directory service source files.
2 Log into the storage pool authority node as root.
3 Edit file /etc/puredisk/ldap.xml to reflect the changes that you made to
your directory service.
4 Log into the PureDisk Web UI.
5 Click Manage > Policies.
6 Under Miscellaneous Workflows, click the plus sign (+) next to External
LDAP server synchronization.
7 Select System policy for Syncing external LDAP users.
8 In the right pane, click Run policy.

Changing the yourdescriptionattrib variable or the yourmailattrib variable in your directory service’s ldap.xml file
If you change the yourdescriptionattrib variable or the yourmailattrib variable
in your directory service, you need to update PureDisk’s /etc/puredisk/ldap.xml
file.
To change the yourdescriptionattrib variable or yourmailattrib variable
1 Use your directory service’s methods to change the object class names in
your directory service source files.
2 Log into the storage pool authority node as root.
3 Edit file /etc/puredisk/ldap.xml to reflect the changes that you made to
your directory service.
4 Log into the PureDisk Web UI.


5 Click Settings > Configuration.
6 In the left pane, expand LDAP Server.
7 Select External LDAP.
8 In the right pane, click Manage User Groups.
9 Select each group.
10 Click Remove.
11 Click Save.
12 Run the system policy for synchronizing external directory service users.
Complete the following steps:
■ Click Manage > Policies.
■ In the left pane, under Miscellaneous Workflows, click the plus sign (+)
next to External LDAP server synchronization.
■ Select System policy for Syncing external LDAP users.
■ In the right pane, click Run policy.

13 Use the procedure in the following topic to add the user groups back:
See “Managing user groups” on page 39.
14 Run the system policy for synchronizing external directory service users
again.
The instructions for how to run this policy are in step 12.

Disabling external authentication


You can disable external authentication and enable PureDisk’s internal OpenLDAP
authentication. You might want to change authentication if the external directory
service is down or unavailable.

Note: If you disable external directory service authentication, PureDisk disables
TLS, too. If you re-enable external directory service authentication, remember to
re-enable TLS at that time.

To change the authentication method


1 Click Settings > Configuration.
2 In the left pane, click the plus (+) sign to the left of LDAP Server.
3 Select External LDAP.


4 In the right pane, clear the Enable LDAP Authentication box.
When this box is clear, PureDisk uses its internal directory service to
authenticate users.
5 Click Save.
6 Run the system policy for synchronizing external directory service users.
Complete the following steps:
■ Click Manage > Policies.
■ In the left pane, under Miscellaneous Workflows, click the plus sign (+)
next to External LDAP server synchronization.
■ Select System policy for Syncing external LDAP users.
■ In the right pane, click Run policy.

Changing the TLS specification


You can enable or disable your site’s TLS specification after its initial configuration.
To change the TLS specification
1 Click Settings > Configuration.
2 In the left pane, click the plus (+) sign to the left of LDAP server.
3 Select External LDAP.
4 Change the specification, as follows:
■ To enable TLS, check the Enable TLS for LDAP Communication box.
■ To disable TLS, clear the Enable TLS for LDAP Communication box.

5 Click Save.
6 Log in to the storage pool authority as root.
7 Type the following command to restart pdweb:

# /etc/init.d/puredisk restart pdweb

Modifying the base search path


You can modify the information in the Base Search Path field. You can use this
procedure if you change the hierarchy in your external directory service.
To modify the base search path


1 Click Settings > Configuration.
2 In the left pane, click the plus (+) sign to the left of LDAP server.
3 Select External LDAP.
4 In the right pane, in the Base Search Path field, change the search path.
5 Click Save.
6 Run the system policy for synchronizing external directory service users.
Complete the following steps:
■ Click Manage > Policies.
■ In the left pane, under Miscellaneous Workflows, click the plus sign (+)
next to External LDAP server synchronization.
■ Select System policy for Syncing external LDAP users.
■ In the right pane, click Run policy.
Chapter 2
Single-port communication
This chapter includes the following topics:

■ About single port communication

■ Configuring single-port communication

About single port communication


Single port communication directs all network communication through one TCP/IP
port.
By default, network communication between clients and nodes occurs on multiple
ports. This communication requires multiple open ports between hosts. You might
want to reconfigure your storage pool to use the single-port communication
feature, depending on the security requirements at your site.
PureDisk supports multiport environments. However, storage pools that you
modify to use single port communication require fewer firewall ports to be open
between PureDisk services and PureDisk clients.
You can implement single-port communication at any time, but this feature is
easier to implement right after an initial installation and before you run any
backups. The only prerequisite for this feature is that the storage pool be defined
in terms of fully qualified domain names (FQDNs). If your storage pool is defined
in terms of host names or IP addresses, perform the procedures for converting
host names or IP addresses to FQDNs that the following manual describes:
See the PureDisk Administrator’s Guide.

Configuring single-port communication


The examples in the reconfiguration procedure assume a storage pool with two
PureDisk nodes and two clients.
Table 2-1 shows the example storage pool.

Table 2-1 Example environment

Entity                                          IP address

Client 1                                        IP-out1

Client 2                                        IP-in3

Firewall                                        IP-in4, IP-out2, IP-out3, IP-out4,
                                                IP-out5, IP-out6, and so on.

PureDisk node 1, which hosts the following      IP-in1
services:
■ Content router 1
■ Controller 1
■ Metabase engine 1
■ Metabase server
■ NetBackup export engine
■ Storage pool authority

PureDisk node 2, which hosts the following      IP-in2
services:
■ Content router 2
■ Controller 2
■ Metabase engine 2

The preceding table uses the following abbreviations:


■ IP-inx is an IP address behind the firewall and in a private range. For example,
100.100.100.100 through 100.100.100.024.
■ IP-outx is an IP address outside the firewall somewhere on the internet.
To configure single-port communication
1 Analyze the ports currently used and configure the firewall.
See “Configuring your domain name server (DNS) and firewall” on page 51.
2 Add an FQDN to each PureDisk service.
See “Adding FQDNs to each service” on page 53.
3 Create the department for which you want to use single-port communication.
See “Creating a new department with single-port settings” on page 53.
4 Edit the configuration and specify the single-port number.


See “Specifying port number 443 as the default port in the configuration file
template” on page 56.
5 (Conditional) Configure the single port in replication policies.
Perform this step if other storage pools replicate data to this storage pool.
See “(Conditional) Configuring port 443 in replication policies” on page 57.
6 Assign clients to the new department that uses single ports.
See “Installing agent software on the clients or moving clients” on page 58.

Configuring your domain name server (DNS) and firewall


The procedure in this topic helps you to analyze and configure your site’s DNS
and firewall.
To configure your DNS and firewall
1 On your DNS, configure the ports that the storage pool uses.
Use your firewall software’s documentation to help you determine the inbound
ports and outbound ports that are in use at this time. Perform the following
tasks:
■ Determine which services are behind the firewall.
■ Make sure that the DNS can resolve the correct IP address regardless of
whether the client is on the internet or behind the firewall.
The example storage pool uses the following ports:

DNS name (in FQDN format)     Client 1 (outside the firewall)   Client 2 (inside the firewall)

PureDiskCR1.acme.com          IP-out3                           IP-in1

PureDiskCR2.acme.com          IP-out4                           IP-in2

PureDiskCTRL1.acme.com        IP-out5                           IP-in1

PureDiskCTRL2.acme.com        IP-out6                           IP-in2

PureDiskSPA.acme.com          IP-out2                           IP-in1

PureDiskMBS.acme.com          IP-out2                           IP-in1

PureDiskDebug.acme.com        IP-out7                           IP-in1
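As an optional sanity check that is not part of the documented procedure, you
can query one of these names from a client on each side of the firewall, for
example with the host(1) command. A client outside the firewall should receive
the outside address (IP-out2 in this example), and a client inside the firewall
should receive the inside address (IP-in1):

# host PureDiskSPA.acme.com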

2 Configure the firewall to translate the IP addresses and outside ports to inside
addresses and inside ports.
Use your firewall software’s documentation to help you translate the ports.
In the example storage pool, the translations are as follows for the outside
ports:

Outside IP Outside port

IP-out2 443

IP-out3 443

IP-out4 443

IP-out5 443

IP-out6 443

IP-out7 443

In the example storage pool, the translations are as follows for the inside
ports:

Inside IP Inside port

IP-in1 443

IP-in1 10082

IP-in2 10082

IP-in1 10101

IP-in2 10101

IP-in1 10087
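How you define these translations depends on your firewall product. As an
illustration only, the first outside-to-inside translation (outside port 443 on
IP-out2 forwarded to port 443 on IP-in1) might look like the following on a
Linux iptables firewall, where 203.0.113.2 and 100.100.100.101 stand in for
IP-out2 and IP-in1:

# iptables -t nat -A PREROUTING -d 203.0.113.2 -p tcp --dport 443 -j DNAT --to-destination 100.100.100.101:443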

3 Proceed to the following:


See “Adding FQDNs to each service” on page 53.
Adding FQDNs to each service


The following procedure explains how to add an FQDN to each service’s
configuration information.
To add FQDN information for each service
1 Click Settings > Topology.
2 Expand all the items in the left pane so that PureDisk displays all storage
pool services.
3 Select a service and add the FQDN information.
For example, select the first content router that appears and complete the
following steps:
■ Type the FQDN for this particular service into the Host Name (FQDN)
field.
This field accepts a host name, but to enable this feature, specify the FQDN
for this particular service.
■ Click Save.

4 Repeat step 3 for each content router, metabase server, storage pool
authority, and NetBackup export engine service on every node.
Do not perform these steps for the metabase engines. For the metabase
engines, PureDisk updates the FQDN information automatically when you
update the storage pool authority and the controller, respectively.
5 Proceed to the following:
See “Creating a new department with single-port settings” on page 53.

Creating a new department with single-port settings


The clients that you want to enable as single-port clients need a customized
configuration file and a department that is dedicated to single-port use. The
procedures in this topic explain how to create a new department and how to apply
client configuration file templates to this new department.
PureDisk ignores any leading or trailing spaces that you specify in a department
name.

Note: Make sure to add a new department. The single-port feature does not work
if you add a new location.

Use the procedures in this topic as follows:


■ To implement the single-port feature for a new installation, or to implement
the single-port feature for an existing installation in which the clients that
you want to enable as single-port clients reside in different departments,
perform the following tasks:
■ Create a new department.
See “(Conditional) Creating a new department” on page 54.
■ Create a new configuration template.
See “Creating a new configuration template” on page 55.
■ Assign the new template to the new department.
See “Assigning the new configuration template to the new department”
on page 55.

■ To implement the single-port feature in an existing storage pool when the
clients to be enabled as single-port clients reside in the same department,
perform the following tasks:
■ Create a new configuration template.
See “Creating a new configuration template” on page 55.
■ Assign the new template to the new department.
See “Assigning the new configuration template to the new department”
on page 55.

(Conditional) Creating a new department


Perform the procedure in this topic under the following circumstances:
■ To implement the single-port feature for a new installation
■ To implement the single-port feature for an existing installation in which the
clients you want to enable as single-port clients reside in different departments
To create a new department
1 Click Settings > Configuration.
2 In the left pane, expand User management > Departments.
3 Select Departments.
4 In the right pane, click Add Department.
5 In the Department Name field, type in a name for this new department.
For example, assume that you want to name this department for single port
use. You can call this department ATOP Dept, which means All Through One
Port Department.
6 Click Add.
7 Proceed to the following:
See “Creating a new configuration template” on page 55.

Creating a new configuration template


This procedure explains how to create a new configuration template.
To create a new configuration template
1 Click Settings > Configuration.
2 In the left pane, expand Configuration file templates > PureDisk Client
Agent.
3 Select Default ValueSet for PureDisk Client Agent.
4 In the right pane, click Copy ValueSet.
5 In the left pane, select Copy of Default ValueSet for PureDisk Client Agent.
6 In the right pane, in the ValueSet Name (new) field, type a new name.
For example, ATOP Template.
7 Click Save.
8 Proceed to the following:
See “Assigning the new configuration template to the new department”
on page 55.

Assigning the new configuration template to the new department
This procedure explains how to assign a new configuration template to a new
department.
To assign the new configuration template to the new department
1 Click Settings > Configuration.
2 In the left pane, expand Configuration file templates > PureDisk Client
Agent.
3 Select the new template you created in the following procedure:
See “Creating a new configuration template” on page 55.
For example, select ATOP Template.
4 In the right pane, click Assign Template.
5 Click the box to the left of the name of the new department.
For example, click to the left of ATOP Dept.
6 Click Assign.
7 Proceed to the following:
See “Specifying port number 443 as the default port in the configuration file
template” on page 56.

Specifying port number 443 as the default port in the configuration file template
The following procedure explains the configuration sections that you need to edit
to specify that PureDisk use single port 443 for client communication. This feature
supports single-port communication only through port 443.
After you specify port 443, you can install the agent software on your clients.
When you install the agent software, specify the new department that you created
that uses the single port. These new clients assume the specifications in the new
configuration template that you assigned to the new department.
To specify port 443 as the default port
1 Click Settings > Configuration.
2 In the left pane, expand Configuration file templates > PureDisk Client
Agent.
3 Expand the new template that you created in the following procedure:
See “Creating a new configuration template” on page 55.
For example, expand ATOP Template.
4 Expand the following objects:
■ contentrouter > port
■ ctrl > tcport
■ debug > dldport

5 Under the expanded contentrouter > port, select All OS: portnumber.
6 In the Value field, type 443.
7 Click Save.
8 Repeat the following steps to change the port value to 443 for the ctrl > tcport
field and the debug > dldport field:
■ Step 6
■ Step 7
9 Perform one of the following tasks:
■ (Conditional) Configure port 443 in replication policies
See “(Conditional) Configuring port 443 in replication policies” on page 57.
■ Install the agent software on the clients or move the clients
See “Installing agent software on the clients or moving clients” on page 58.

(Conditional) Configuring port 443 in replication policies


You might have upgraded to PureDisk 6.6 and have replication policies on other
storage pools. These policies may need to push data through port 443 to this
storage pool. The following procedure explains how to configure such policies.
For example, assume that you have two storage pools. Their names are Remote
and Central. You have upgraded both storage pools to PureDisk 6.6, and you
configured Central to use single-port communication. If you replicate data from
Remote to Central, you need to log into Remote and perform the following
procedure.
To configure replication policies to send data to port 443
1 Log into the storage pool authority Web UI on the storage pool that replicates
data.
For example, if you replicate data from storage pool Remote to storage pool
Central, log into storage pool Remote.

2 Click Manage > Policies.


3 In the left pane, expand the tree to expose the replication policies that send
data to the other storage pool.
4 Complete the following steps to edit a replication policy:
■ Select a replication policy.
■ Click the Parameters tab.
■ Type 443 into the Communication Port field.
■ Click Save.
5 Repeat step 4 for each replication policy.


6 Proceed to the following:
See “Installing agent software on the clients or moving clients” on page 58.

Installing agent software on the clients or moving clients


Perform one of the following procedures to assign clients to the department that
uses single ports. The procedure to use depends on whether you performed an
initial installation or an upgrade.

Installing agent software for an initial installation


Perform the following procedure to enable single-port communication after an
initial installation.
To install the agent software and specify the name of the new department
◆ Use the procedures in the following manual to install the PureDisk agent
software on each new client:
See the PureDisk Client Installation Guide.
When the software prompts you for a department name, specify the name of
the new department you created that uses the single port.

Moving clients to the new department for an upgrade


Perform the following procedure to enable single-port communication after an
upgrade.
To move clients to the new department
1 Click Manage > Agent.
2 In the left pane, expand the tree to expose the clients that you need to move.
3 Complete the following steps to edit each client and add the new department:
■ Select a client.
■ In the right pane, in the Properties: Agent panel, use the Department
drop-down list to select the department you created for single-port
communication.
■ Click Save.

4 Repeat step 3 for each client that you need to move to the new department.
Chapter 3
Data replication
This chapter includes the following topics:

■ About data replication

■ About data replication and PureDisk release levels

■ About data replication policies

■ Creating or editing a data replication policy

■ Replication jobs

■ Copying and deleting a replication policy

■ Managing replicated data selections

■ Tuning replication

About data replication


When PureDisk runs a data replication policy, it copies the backed up PureDisk
data selections from one storage pool to another storage pool. The data replication
process copies file data and file metadata. First, PureDisk takes a snapshot of the
data selection. Next, it copies the data within the snapshot to a different data
selection in a different storage pool.
For example, you can create a policy to replicate backed-up data selections from
a small storage pool in a small office's location to a large storage pool in a central
location. If you have a work group in a small office that is connected to your
headquarters over a relatively slow WAN, you can replicate data from that small
office to the headquarters location if you have a storage pool in both locations.
The PureDisk storage pool in the small office is an all-in-one storage pool; all of
the PureDisk services are installed on a single node. Your headquarters has other,
larger storage pools. For safety and security reasons, a copy of all data must be
available at headquarters at all times. You can create a replication policy to copy
data from the remote storage pool to the headquarters storage pool at regular
intervals, such as nightly.
The data replication process does not copy system data such as data selections
or policies that you configured. You can preserve this system data through storage
pool authority replication. For more information about storage pool authority
replication, see the following:
See “About storage pool authority replication (SPAR)” on page 199.
You can run a replication policy at any time. However, if the policy includes any
data selections that PureDisk has not yet backed up, the replication policy does
not replicate those data selections. A replication policy copies only backed up data
selections.
Additionally, replication jobs and content router rerouting jobs cannot run
simultaneously. If you start a replication job and then start a rerouting job,
PureDisk stops the replication job.

Note: If you experience replication job performance degradation and you have a
high-latency communication network between the two storage pools, you can
possibly improve performance by changing some default TCP/IP settings. For
more information, see "About changing TCP/IP settings to improve replication
job performance" in the PureDisk Best Practices Guide, Chapter 5: Tuning PureDisk.

The following topics contain more information about replication:


■ See “About data replication and PureDisk release levels” on page 60.
■ See “About data replication policies” on page 61.
■ See “Creating or editing a data replication policy” on page 62.
■ See “Replication jobs” on page 68.
■ See “Copying and deleting a replication policy” on page 68.
■ See “Managing replicated data selections” on page 68.
■ See “Tuning replication” on page 71.

About data replication and PureDisk release levels


Examine the release level of each storage pool before you implement replication.
To verify the PureDisk version of a storage pool, click About in the Web UI.
The rules for replication and release levels are as follows:
■ You can replicate between storage pools if each storage pool is at the same
release level.
■ You can replicate between storage pools when the destination storage pool is
at a release level that is higher than the source storage pool. However,
Symantec recommends that you install all your storage pools with the same
PureDisk release.
For example, Symantec supports replication between a source storage pool at
the PureDisk 6.5.x release level and a destination storage pool at the PureDisk
6.6 release level. Symantec does not guarantee data integrity when you replicate
between storage pools with other nonidentical release levels.
■ You cannot replicate between storage pools when the destination storage pool
is at a release level that is lower than the source storage pool.

About data replication policies


Typically, you create data replication policies only after some backups have
completed. Replication policies apply to a source storage pool. You can specify
the destination storage pool, but you cannot specify the name for the replicated
data selection(s) on the destination storage pool.
For more information about how to create data replication polices, see the
following:
See “Creating or editing a data replication policy” on page 62.
When you run a replication policy for the first time, PureDisk creates data
selections on the destination storage pool. You can create and run an additional
replication policy that includes the data selections from the first policy. If you
run additional policies, PureDisk creates the additional data selections on the
destination storage pool.
For example, assume that a source storage pool runs replication policy AAA. Policy
AAA replicates data selections 111, 222, and 333 to a destination storage pool.
PureDisk creates three data selections for the replication. Assume that you later
create replication policy BBB to replicate data selection 111 again. PureDisk creates
an additional data selection for data selection 111.
PureDisk keeps track of the data that has been replicated. Consequently, when
you run a specific replication policy again, PureDisk forwards only the new data
to the destination storage pool. Specifically, the new data is data that was added
to the data selection(s) since the previous policy run.
Any clients that own the source data selections are also replicated to the
destination storage pool. PureDisk displays the replicated clients and data
selections with a special icon when you click Manage > Agent on the destination
storage pool’s Web UI.

Note: You can delete data from a data selection when you run a removal policy.
PureDisk does not remove this data automatically from a replicated data selection.
PureDisk does not replicate delete actions. If you want to keep the source and the
replicated data selection identical, define similar removal policies for both data
selections.

After you create and run a policy to replicate data from a source storage pool to
a destination storage pool, you can do the following:
■ View the replicated data on the destination storage pool.
For more information about how to view replicated data, see the following:
See “Replication jobs” on page 68.
■ Use restore functions to copy the replicated data to a client that is attached
to the destination storage pool.
For more information about how to copy replicated data, see the following:
See “Copying replicated data to clients on the destination storage pool”
on page 70.
■ Restore the replicated data back to the original client or another client on the
source storage pool.
For more information about how to restore replicated data, see the following:
See “Restoring replicated data back to clients on the source storage pool”
on page 70.

Creating or editing a data replication policy


The following procedure explains how to create a new data replication policy and
how to edit an existing data replication policy.
To create or edit a replication policy
1 Select Manage > Policies.
2 In the left pane, under Data Management Policies, click Replication.
3 Complete one of the following steps:
■ To create a policy, in the right pane click Create Policy.
■ To edit a policy, expand Replication and click a policy name.

4 Complete the General tab.


See “Completing the General tab for a Replication policy” on page 63.
5 Complete the Data Selections tab.


See “Completing the Data Selections tab for a Replication policy” on page 64.
6 Complete the Scheduling tab.
See “Completing the Scheduling tab for a Replication policy” on page 66.
7 Complete the Parameters tab.
See “Completing the Parameters tab for a Replication policy” on page 66.
8 Click Add.

Completing the General tab for a Replication policy


The following procedure explains how to complete this tab.
To complete the General tab
1 Type a name for this policy in the Name field.
2 Select Enabled or Disabled.
This setting lets you control whether PureDisk runs the policy according to
the schedule you specify in the Scheduling tab, as in the following situations:
■ If you select Enabled, PureDisk runs the policy according to the schedule.
■ If you select Disabled, PureDisk does not run the policy according to the
schedule. Disabled is the default.
For example, you might want to stop running this policy during a system
maintenance period. If you select Disabled, you do not need to enter
information in the Scheduling tab to suspend and then reenable this policy.

3 (Optional) Specify an escalation action.


PureDisk can notify you if replication does not complete within a specified
time. For example, you can configure PureDisk to send an email message to
an individual if a replication policy workflow does not complete in an hour.
Select times in the Escalate warning after or the Escalate error and terminate
after drop-down boxes to specify the elapsed time. If the policy workflow
does not complete before the warning timeout, PureDisk generates a warning
event. If the policy workflow does not complete before the error timeout,
PureDisk generates an error event.
If you select either option, you must create a policy and an event escalation
action. These escalation actions define the email message, define its recipients,
and associate the escalation action with the policy.
For more information about policy and event escalations, see the following:
See the PureDisk Backup Operator’s Guide.

Completing the Data Selections tab for a Replication policy


The Data Selections tab displays a tree view. From this view, you can select one
or more data selections to include in the policy.
From this tab, you can also specify filters for PureDisk to apply to the data
selections you choose. You can filter on the data selection name or the description.
You also can specify a data selection template as the filter. If you use a filter,
PureDisk applies the filter when the policy runs.
To specify data selections
1 Click Data Selection.
2 Select a data selection type from the Data Selection Type drop-down list.
3 Expand the tree and select one or more data selections.


To make a selection, click in the box to the left of the label.
To select a number of data selections, select a storage pool or department.
This action selects all data selections that exist under the storage pool or
department.
A data selection is associated with a client. PureDisk applies the policy to all
selected data selections and the associated clients when you save the policy.
If a replication policy includes any data selections that belong to an inactive
client, PureDisk replicates that inactive client’s data selections.
If you do not select a data selection in the tree, PureDisk uses all data
selections in the storage pool.
4 Decide whether you want to include all the data selections you checked in
step 3.
Proceed as follows:
■ If you want to include all the data selections in the box, make sure that
Include all data selections selected above is selected. Then proceed to the
following: See “Completing the Scheduling tab for a Replication policy” on page 66.
■ If you want to exclude, or filter out some data selections, select Apply all
inclusion rules below to dataselections selected above. Proceed to step
5.

5 (Optional) Specify one of the following filter methods:


■ Fill in the Data selection name or Data selection description fields. Use
wildcard characters (* and ?) to filter the data selections. You can filter
on the data selection name field or on the data selection description field.
For example, assume that the following data selection names exist and
that they were selected under a department in the tree:

U_*.jpg_files
W_*.jpg_files
W_*.xls_files

If you type *files in the Data selection name field, the policy backs up
all three data selections.
If you type W* in the field, the data selections for this policy include only
the data selections that are named W_*.jpg_files and W_*.xls_files.
For more information on filtering, see the following:
See the PureDisk Backup Operator’s Guide.
■ Select a template from the Data selections based on template drop-down
list to specify an existing data selection template to use. PureDisk uses
all data selections that it linked to the template. This action assumes that
the template was previously applied to the client.

Completing the Scheduling tab for a Replication policy


Create a schedule that lets you save your key data items after any regular company
processes have updated the data.
For optimal performance, schedule removal policies for replicated data selections
on the destination storage pool to run at different times than the replication
policy on the source storage pool.
To specify the schedule
1 Click Scheduling.
2 Select hourly, daily, weekly, or monthly to specify how frequently you want
this policy to run.
3 Select schedule details.
The detail options that PureDisk displays depend on whether you selected
an hourly, daily, weekly, or monthly schedule in the preceding step.

Completing the Parameters tab for a Replication policy


The following procedure explains how to complete the Parameters tab.
To complete the Parameters tab
1 Click Parameters.
2 In the IP of the remote SPA field, type the IP address or the FQDN for the
destination (remote) storage pool.
Symantec recommends that you specify an FQDN.
3 In the Login remote SPA field, type the login ID for the destination (remote)
storage pool.
4 In the Password remote SPA field, type the password for the destination
(remote) storage pool.
5 (Optional) In the Communication Port field, type the open communication
port of the storage pool that is used for replication.
By default, this port is 10082. If you enabled single port communication,
specify port number 443.
For more information about single ports, see the following:
See the PureDisk Storage Pool Installation Guide.
6 In the Type of Replication field, select one of the following:
■ Select Normal to include images of any new files or changed files since
the full replication. Typically, you can select Normal.
■ Select Reverify all to provide a complete copy of all the files that were
specified in the data selection. Select Reverify all if you suspect that a
problem exists on your destination storage pool. For example, content
router corruption or a complete storage pool crash.

7 (Optional) Type a bandwidth size to use when files are transferred.


If you leave this field blank, PureDisk by default uses the maximum bandwidth
for the network.
This field lets you specify the maximum network bandwidth to use during
the replication. For example, if you specify 50 Kbps, PureDisk transfers data
at that rate to the destination storage pool. This number is an average. The peak
bandwidth usage can be higher than the number specified, but the average
bandwidth usage cannot.
8 Check or clear the Force encryption box.
When you check this option, PureDisk encrypts and compresses the data over
the wire during transmission between the source storage pool and the
destination storage pool.
If you want the data to arrive at the destination storage pool in an encrypted
state, specify encryption in the backup policy on the source storage pool.
9 (Optional) In the Define backup window field, select a start time and end
time for the backup window from the drop-downs.
PureDisk queues the job at the time you specify in the Scheduling tab. Other
jobs in the queue might prevent PureDisk from running the job immediately.
When you specify a start time and an end time, you ensure that the job does
not start outside of this period. In addition, PureDisk stops the job if the job
does not end before the end time that you specify.
Make sure that the start time you specify falls within this replication window.

10 (Optional) Click Set as defaults.


If you select this option, PureDisk uses these settings for all replication policies
that you create later.
11 Click Add to save the policy.
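Before the first replication run, you may want to confirm that the source storage
pool can reach the destination storage pool on the replication port. The following
is a minimal sketch; it assumes a destination FQDN of remote-spa.example.com,
the default port 10082, and that the nc (netcat) utility is available on the source
node:

# ping -c 2 remote-spa.example.com
# nc -z -w 5 remote-spa.example.com 10082

If you enabled single port communication, test port 443 instead.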

Replication jobs
PureDisk runs replication jobs on the source storage pool.
PureDisk creates virtual agents on the destination (remote) storage pool when
you implement replication. PureDisk runs the following types of jobs on virtual
agents:
■ Imports of forwarded data during replication
■ Data removal
■ Maintenance

Copying and deleting a replication policy


The following procedure explains how to copy or delete a replication policy.
To copy or delete a replication policy
1 Select Manage > Policies.
2 In the left pane, under Data Management Policies, expand Replication.
To expand Replication, click the plus sign (+) to the left of the Replication
label.
3 Select the replication policy that you want to copy or delete.
4 In the right pane, click one of the following:
■ Copy Policy
■ Delete Policy

Managing replicated data selections


The following information pertains to managing replicated data selections:
■ See “Viewing replicated data” on page 69.
■ See “Working with replicated agents and data selections” on page 69.

■ See “Copying replicated data to clients on the destination storage pool”
on page 70.
■ See “Restoring replicated data back to clients on the source storage pool”
on page 70.
■ See “Restoring replicated Oracle data” on page 71.

Viewing replicated data


After you run a replication policy, the replicated client and its data selections
appear in the destination (remote) storage pool's Web UI.
The destination storage pool displays both information about its own clients and
information about replicated data from other storage pools. PureDisk differentiates
replicated client names and replicated data selection names. In the destination
storage pool’s Web UI, an [R] icon to the left of a name indicates a replicated
item. PureDisk uses this icon on the destination storage pool for both replicated
clients and replicated client data selections.
The following procedure explains how to view replicated data selections.
To view replicated data
1 Log on to the Web UI of the destination storage pool.
2 Click Manage > Agent.
3 In the left pane, select the replicated client.
The icon for a replicated client includes [R] and the name of the client. When
you click a client name, client information appears in the right pane.

Working with replicated agents and data selections


The first time PureDisk replicates a data selection, the replicated data selection
always appears in the Unknown location and in the Unknown department.
The replicated data selection appears this way regardless of the location and
department that hosted the data selection on the source storage pool. You can
move the data selection to a different location and department after this first
replication.
Never rename the Unknown location or the Unknown department. PureDisk
looks for these containers when it replicates data selections. PureDisk places
each data selection in these containers the first time that the data selection is
replicated.

Copying replicated data to clients on the destination storage pool


The following procedure explains how to copy replicated data.
To copy replicated data to clients on the destination storage pool
1 Log on to the Web UI of the destination storage pool.
2 Click Manage > Agent.
3 In the left pane, select the replicated client.
The icon for a replicated client includes [R] and the name of the client. When
you click a client name, client information appears in the right pane.
4 Use the PureDisk restore functions to copy the data.
For more information about how to restore data, see the following:
See the PureDisk Backup Operator’s Guide.

Restoring replicated data back to clients on the source storage pool


You can restore replicated data only to clients that are currently connected to the
destination storage pool.
You can replicate an MS Exchange, MS SQL, Oracle, Oracle UDJ, or System State
and Services data selection to another storage pool. To restore data, treat the
client to which you want to restore the data as if it were an alternate client. For
more information about alternate client restores of application data, see the
PureDisk Backup Operator’s Guide.
The following procedure assumes that the original client system can no longer
be used or that the original data is corrupted.
To restore replicated data back to clients on the source storage pool
1 Install PureDisk on the client where you want to restore data.
See the PureDisk Client Installation Guide.
2 Log on to the Web UI of the destination storage pool.
This is the storage pool that received the replicated data from the source
storage pool.
3 Click Manage > Agent.

4 In the left pane, select the replicated client.


The icon for a replicated client includes [R] and the name of the client. When
you click a client name, client information appears in the right pane.
5 In the right pane, select Restore Files.
When you restore a data selection, select the new client of the source storage
pool as the destination client.
See the PureDisk Backup Operator's Guide for information about how to
perform the restore.

Restoring replicated Oracle data


You need to perform some additional configuration steps if you want to restore
Oracle data that you replicated to another storage pool. For more information
about these configuration steps, see the PureDisk Backup Operator's Guide.

Tuning replication
PureDisk includes configuration parameters that you can adjust to tune
replication performance. For information about tuning, see the following:
See “Tuning replication performance” on page 337.
Chapter 4
Exporting data to NetBackup
This chapter includes the following topics:

■ About exporting data to NetBackup

■ Configuring PureDisk and NetBackup for export capability

■ Creating or editing an export to NetBackup policy

■ Running an export to NetBackup policy

■ Performing a point-in-time export to NetBackup

■ Troubleshooting export job failures

■ Copying or deleting an export to NetBackup policy

■ Restoring from NetBackup

■ Restoring to a PureDisk client that is not a NetBackup client

■ Restoring to a PureDisk client that is also a NetBackup client

About exporting data to NetBackup


The NetBackup export engine lets you back up your PureDisk files to NetBackup
tape or disk. The export engine lets you export a backed up Files and Folders data
selection from a PureDisk content router to NetBackup. NetBackup then catalogs
the data and writes it to tape or disk in NetBackup’s file format. You can use these
files for long-term data protection. If you ever delete the files from the original
client or from PureDisk storage, you can restore them from NetBackup.

After you export the PureDisk files to NetBackup, you can treat these files as if
they were native NetBackup files. From the NetBackup administration console,
you can generate NetBackup reports, browse the files, and manage the files.
To restore the data that you exported to NetBackup, use the NetBackup procedures
that are described in the NetBackup administration guides.
The following provide an overview of the NetBackup export engine:
■ See “Export limitations” on page 74.
■ See “Requirements for exporting data to NetBackup” on page 74.
■ See “Requirements for restoring data from NetBackup” on page 75.
■ See “Enabling and using the NetBackup export engine” on page 75.

Export limitations
The PureDisk NetBackup export engine lets you export backed up PureDisk Files
and Folders data selections to NetBackup. The NetBackup export engine does not
export other PureDisk data selection types.
When you choose data selections for export, a tree structure appears in the Web
UI, and you make your selection from the tree. Be aware that PureDisk exports
only the Files and Folders data selections in the tree. For example, if you select a
storage pool that has many types of data selections, PureDisk exports only the
Files and Folders data selections. In addition, if you choose to export replicated
data from a target storage pool, PureDisk exports only the replicated Files and
Folders data selections.

Requirements for exporting data to NetBackup


The ability to export Files and Folders data selections from PureDisk to NetBackup
requires the following NetBackup software and licenses:
■ NetBackup 6.0 MP5 or later
■ NetBackup client
■ A NetBackup DataStore license and NetBackup client license
When you export PureDisk data to NetBackup, you use a NetBackup DataStore
policy. This feature requires that you install a NetBackup DataStore license
on the NetBackup server. The PureDisk license key includes a NetBackup
DataStore license.
Contact your Symantec sales representative to obtain the required software and
license.

Requirements for restoring data from NetBackup


The exported Files and Folders data selections become NetBackup files after you
export them. You can use NetBackup restore methods to restore the files. The
host to which you restore the data selections must run NetBackup release 6.0 MP5
or later.
When you restore the PureDisk data, you perform the restore from NetBackup,
and you restore to NetBackup. If necessary, you can use a network transfer method
to move the data to the correct PureDisk client. If you want to put the files back
under PureDisk control again, use PureDisk to back them up.

Enabling and using the NetBackup export engine


The following explain how to enable and use the NetBackup export engine:
■ See “Configuring PureDisk and NetBackup for export capability” on page 75.
■ See “Creating or editing an export to NetBackup policy” on page 85.
■ See “Running an export to NetBackup policy” on page 91.
■ See “Performing a point-in-time export to NetBackup” on page 91.
■ See “Troubleshooting export job failures” on page 92.
■ See “Copying or deleting an export to NetBackup policy” on page 94.
■ See “Restoring from NetBackup” on page 94.
■ See “Restoring to a PureDisk client that is not a NetBackup client” on page 96.
■ See “Restoring to a PureDisk client that is also a NetBackup client” on page 97.

Configuring PureDisk and NetBackup for export capability
To enable the PureDisk export capability, you must configure both of the following
components on the same PureDisk node:
■ The PureDisk NetBackup export engine.
To configure this engine, install the PureDisk nbu service on this node. When
you install the nbu service, you enable this node to export data from this
PureDisk storage pool to a NetBackup environment. You can install the nbu
service at initial installation time, or you can add it later.
■ The NetBackup client software.
These clients are distributed with NetBackup.

You can configure the required software on its own dedicated node, or you can
configure this software on a node with other PureDisk services.
Figure 4-1 shows the software that you need to configure to enable PureDisk
exports to NetBackup.

Figure 4-1 Example environment for exporting PureDisk data to NetBackup


The figure shows a multinode PureDisk environment attached to a NetBackup
environment, as follows:

Label   Object

1       PureDisk storage pool

2       NetBackup environment

3       PureDisk node_1, which hosts the following services:
        ■ Storage pool authority
        ■ Metabase server
        ■ Metabase engine
        ■ NetBackup export engine
        ■ NetBackup client

4       PureDisk node_2, which hosts a content router service

5       PureDisk node_3, which hosts the following services:
        ■ NetBackup export engine
        ■ NetBackup client

6       PureDisk client kwiek

7       PureDisk client speedy

Both node_1 and node_3 host a PureDisk NetBackup export engine and the
NetBackup client software. In this storage pool, node_3 can be a low-end computer
because it only serves to transfer data. If you had an all-in-one PureDisk
environment, you would have to install the NetBackup client on that one node.
The figure shows two clients: kwiek and speedy. The NetBackup export engine
on node_1 exports data from kwiek. The NetBackup export engine on node_3
exports data from speedy.
To perform a direct restore of files from NetBackup to speedy, install the
NetBackup client software on speedy. Configure the PureDisk environment first,
and then configure NetBackup.
To configure PureDisk and NetBackup to export PureDisk data selections
◆ Complete the following procedures:
■ See “Configuring NetBackup to receive data exported from PureDisk ”
on page 79.
■ See “Configuring PureDisk to export data to NetBackup” on page 84.

Configuring NetBackup to receive data exported from PureDisk


The procedure in this topic explains how to configure NetBackup to accept
PureDisk data.
To configure NetBackup for PureDisk export capability
1 Install the NetBackup Linux SUSE 2.6 client on each PureDisk node.
Both the PureDisk NetBackup export engine service and the NetBackup client
need to be running together on the same PureDisk node or nodes.
When you install a NetBackup Linux client, the following message might
appear:

No [x]inetd process found.

Ignore this message. A later step in this procedure starts the xinetd daemon.
For information about how to install the NetBackup client, see the following:
See the NetBackup Installation Guide for UNIX and Linux.
2 (Conditional) For each PureDisk node, create a file for the host FQDN and
another file for the service FQDN in the altnames directory on the NetBackup
master server.
Perform this step if the storage pool you want to back up is clustered.
This step is needed because the bp.conf file on each node contains the
physical host address. However, the backup process and the restore process
use the service address.
If necessary, create the altnames directory itself. Within the altnames
directory, use the touch(1) command to create a file for each node's host
FQDN and each node's service FQDN.
Example 1. To create the altnames directory on a UNIX master server, type
the following command:

# mkdir /usr/openv/netbackup/db/altnames

Example 2. Assume that you want to create file names in the altnames
directory of a UNIX NetBackup master server for the nodes in the following
two-node cluster:
■ Node 1 = allinone.acme.com (host FQDN) and allinones.acme.com
(service FQDN)
■ Node 2 = passive.acme.com (host FQDN) and passives.acme.com (service
FQDN)

For the all-in-one node (node 1), type the following commands on the master
server to create the correct files in the altnames directory:

# touch allinone.acme.com
# touch allinones.acme.com

For the passive node (node 2), type the following commands on the master
server to create the correct files in the altnames directory:

# touch passive.acme.com
# touch passives.acme.com

For more information about the altnames directory and creating files inside
the altnames directory, see the following:
See the NetBackup Administrator’s Guide, Volume I.
3 Determine if NetBackup access control (NBAC) is enabled in your NetBackup
environment.
One way to tell if NBAC is enabled is to examine the bp.conf file. If the entry
USE_VXSS = AUTOMATIC or USE_VXSS = REQUIRED appears, NBAC is
enabled. (A sample check appears after this procedure.)
■ If NBAC is enabled, proceed to step 4.
For more information about NBAC, see the NetBackup Security and
Encryption Guide.
■ If NBAC is not enabled, proceed to step 6.

4 (Conditional) On the NetBackup master server, configure NBAC for the
PureDisk nodes.
Perform this step if NBAC is enabled in your NetBackup environment.
Log into the NetBackup master server and complete the following steps:
■ Change to the directory where the bpnbat command resides.
On UNIX master servers, the bpnbat command resides in
/usr/openv/netbackup/bin. On Windows master servers, the bpnbat
command resides in install_path\NetBackup\bin.
■ Type the following command to add the PureDisk node to NBAC:

bpnbat -addmachine

The bpnbat command prompts you for the machine name, prompts you
to create an NBAC password, and prompts you to confirm the password.
For the machine name, type the FQDN of the PureDisk node.
■ Change to the directory where the bpnbaz command resides.
On UNIX master servers, the bpnbaz command resides in
/usr/openv/netbackup/bin/admincmd. On Windows master servers, the
bpnbaz command resides in install_path\NetBackup\bin\admincmd.

■ Type the following command to enable the PureDisk node to perform
access checks:

bpnbaz -allowauthorization pd_node_fqdn
Operation completed successfully.

■ Repeat the preceding bulleted steps for each PureDisk node that hosts a
NetBackup client.
For example, type the following commands on a UNIX master server:

masterserver# cd /usr/openv/netbackup/bin
masterserver# bpnbat -addmachine
Machine Name: potato.idaho.com
Password: *****
Password: *****
Operation completed successfully.
masterserver# cd admincmd
masterserver# bpnbaz -allowauthorization potato.idaho.com
Operation completed successfully.

5 (Conditional) On the PureDisk node that hosts the NetBackup client software,
install VxAT.
Perform this step if NBAC is enabled in your NetBackup environment.
■ Log into the PureDisk node as root.
■ Type the following commands to create NBAC credentials for the node:

puredisknode# cd /usr/openv/netbackup/bin
puredisknode# bpnbat -loginmachine

■ Respond to the bpnbat command's prompts, which are as follows:


■ Does this machine use Dynamic Host Configuration Protocol
(DHCP)? (y/n)?

Type n, and press Enter.


■ Authentication Broker:
Specify the host name of the NetBackup authentication broker, and
press Enter.
■ Authentication port [Enter = default]:
Press Enter to use the default authentication port. If your NetBackup
environment uses a site-specific authentication port, type that port
number.
■ Machine name:
Type the FQDN of the PureDisk node, and press Enter.
■ Password:
Type the password you created in step 4, and press Enter.

■ Edit file bp.conf as follows:


■ Add entries for the NetBackup master server and the NetBackup media
server.
■ Set the USE_VXSS = and (optionally) the AUTHENTICATION_DOMAIN =
parameters as dictated by your site practices. You can set these
parameters through the NetBackup administrator interface or by using
commands.
See your NetBackup administrator documentation for more information
about how to edit the bp.conf file.
■ Repeat the preceding bulleted steps for each node that hosts a NetBackup
client.
For example:

puredisknode# cd /usr/openv/netbackup/bin
puredisknode# bpnbat -loginmachine
Does this machine use Dynamic Host Configuration Protocol (DHCP)? (y/n)? n
Authentication Broker: colonel.flagg.com
Authentication port [Enter = default]:
Machine Name: potato.idaho.com
Password: *****
Operation completed successfully.

6 On each PureDisk node that you want to configure, make sure that the xinetd
daemon is running.
Enter the following command to determine if xinetd is running:

# ps -aef |grep xinetd

If it is not running, enter the following command:

# /etc/init.d/xinetd start

To ensure that the xinetd daemon starts after you restart the system, type
the following command:

# chkconfig xinetd on

7 From a NetBackup administration console, create a DataStore policy with an
Application Backup Schedule.
A DataStore policy is a specific policy type. An Application Backup Schedule
is the default schedule. For information about how to create this policy, see
the NetBackup System Administrator’s Guide, Volume I.
The following are some guidelines for this policy:
■ When you create this policy, make sure to specify a schedule for 24 hours
a day and 7 days a week. This schedule leaves the policy open and available
for whenever PureDisk needs to send data.
■ Remember the name of this policy. You need to specify this name when
you create the PureDisk Export to NetBackup policy.
■ If you enable encryption, NetBackup asks you for a passphrase. NetBackup
uses this passphrase for all data that you export from the client to
NetBackup.
Ensure that you use the same passphrase for the following:
■ The NetBackup export engine
■ The client upon which the source files reside
■ All clients to which you might want to restore the exported data.

■ If you want to export data to NetBackup from a clustered PureDisk storage
pool, specify the nodes upon which the NetBackup export engine service
resides. In NetBackup, specify the fully qualified domain names (FQDNs)
of the physical hosts for these nodes. Do not specify the service FQDNs.
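As noted in step 3 of the preceding procedure, one way to check whether NBAC
is enabled is to search the bp.conf file for the USE_VXSS entry. The following is
a minimal sketch that assumes the default bp.conf location on a UNIX or Linux
master server; the output line is illustrative only:

# grep USE_VXSS /usr/openv/netbackup/bp.conf
USE_VXSS = AUTOMATIC

If the command returns USE_VXSS = AUTOMATIC or USE_VXSS = REQUIRED,
NBAC is enabled and the NBAC-related steps in the preceding procedure apply.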

Configuring PureDisk to export data to NetBackup


The following procedure explains how to configure PureDisk to export data
selections to NetBackup.
To configure a PureDisk environment to export data to NetBackup
1 Make sure that the NetBackup export engine service is included in the storage
pool configuration, is installed on the node that you want to designate as the
NetBackup export engine, and is activated.
You can specify the nbu service on more than one node.
For information about how to add a service, see the following:
■ See “About adding services” on page 280.
■ See “Adding a service to a node” on page 282.
For information about how to activate the NetBackup export engine service,
see the following:
See “Troubleshooting export job failures” on page 92.
2 Use the procedure in the PureDisk Client Installation Guide to install and
configure PureDisk on clients.
You can configure clients with any number of features. However, you cannot
enable a data lock password on a client if you want to export that client’s data
to NetBackup. If necessary, you can enable the data lock password for that
client at install time. When the time comes to export data, you need to disable
the password.
Also note that client naming conventions differ between NetBackup and
PureDisk. If the names of your PureDisk clients do not conform to NetBackup’s
client naming conventions, PureDisk transforms the client name to a
compatible name. PureDisk changes the name internally, and you can see the
result in log files and messages. In addition, you can check the names in the
following file:

/Storage/var/NbuExportClientNameChanges.txt

By convention, NetBackup client names can include only the following
characters:
■ The uppercase English language alphabetic characters, A through Z
■ The lowercase English language alphabetic characters, a through z

■ The digits, 0 through 9


■ A period (.)
■ A hyphen (-)
■ An underscore (_)
Example 1: Assume that you have a PureDisk client named as follows:

my agent name is strider

PureDisk transforms this name to the following:

my_agent_name_is_strider

Example 2: Assume that you have two clients with the following names:

my agent name is strider

my agent_name*is strider

To avoid duplication, PureDisk adds a counter to the end of the second name
it encounters and transforms the names as follows:

my_agent_name_is_strider

my_agent_name_is_strider_2

3 Make sure that one or more data selections that you want to export have been
created and backed up.
If no backed up data selections exist in the storage pool, use the instructions
in the PureDisk Backup Operator's Guide to create one or more data selections
and back them up to the storage pool.
4 Create a PureDisk policy to export data to NetBackup.
For information about how to create an export to NetBackup policy, see the
following:
See “Creating or editing an export to NetBackup policy” on page 85.

Creating or editing an export to NetBackup policy


The export process copies the data selections to NetBackup, but it leaves the data
on the content routers intact. You can have multiple export policies, but because
the export is a single stream, only one export can occur at a time.
The following paragraphs explain how PureDisk manages export jobs:

■ You can create multiple PureDisk export policies for a single NetBackup export
engine. PureDisk runs one export job per export policy at a time.
■ If you have two or more PureDisk export policies, these policies can send data
to the same NetBackup DataStore policy. However, you are not limited to only
one NetBackup DataStore policy. You can have multiple NetBackup DataStore
policies.
■ PureDisk can run multiple export jobs simultaneously from multiple NetBackup
export engines if the data originated on two or more PureDisk clients. However,
if the export jobs work with data that originated from a single PureDisk client,
PureDisk runs the jobs one at a time.
Use the following procedure to create a PureDisk policy that can export Files and
Folders data selections to NetBackup.
To create an Export to NetBackup policy
1 Click Manage > Policies.
2 In the left pane, under Data Management Policies, click Export to NetBackup.
3 Complete one of the following steps:
■ To create a policy, in the right pane click Create Policy.
■ To edit a policy, expand Export to NetBackup and click a policy name.

4 Complete the General tab.


See “Completing the General tab for an Export to NetBackup policy”
on page 87.
5 Complete the Data Selections tab.
See “Completing the Data Selections tab for an Export to NetBackup policy”
on page 88.
6 Complete the Scheduling tab.
See “Completing the Scheduling tab for an Export to NetBackup policy”
on page 89.
7 Complete the Parameters tab.
See “Completing the Parameters tab for an Export to NetBackup policy”
on page 89.
8 Complete the Metadata tab.
See “(Optional) Completing the Metadata tab for an Export to NetBackup
policy” on page 89.
9 Click Add when done.

Completing the General tab for an Export to NetBackup policy


Use the following procedure to complete the General tab.
To complete the General tab
1 Type a name for this policy in the Policy Name field.
2 Select Enabled or Disabled.
This setting lets you control whether PureDisk runs the policy according to
the schedule that you specify in the Scheduling tab. The following situations
illustrate how the settings might be used:
■ If you select Enabled, PureDisk runs the policy according to the schedule.
This selection is the default.
■ If you select Disabled, PureDisk does not run the policy according to the
schedule.
One example of how to use Disabled is when you want to prevent this
policy from running during a system maintenance period. However, you
do not want to enter information in the Scheduling tab to suspend and
reenable this policy.

3 (Optional) Specify escalation times and actions.


PureDisk can notify you if a backup does not complete within a specified time.
For example, you can configure PureDisk to send an email message to an
individual if a backup policy workflow does not complete in an hour.
Select times in the Escalate warning after or the Escalate error and terminate
after drop-down boxes to specify the elapsed time. If the policy workflow
does not complete before the warning timeout (6 hours is the default value),
PureDisk generates a warning message. If the policy workflow does not
complete before the error timeout, PureDisk generates an error message and
terminates the job. The default value for the error timeout is five days.
Do not set the time values low if the backup policy pertains to agents that
have to back up a large number of files. For example, on a system with 1.5
million files, the scan step of a backup job typically takes 6 hours to complete.
The amounts of time needed for the other steps are more difficult
to predict. They depend on the actual number of files that need to be uploaded.
In addition to selecting escalation times, you need to create a policy escalation
action and an event escalation action. These escalation actions define the
email message, define its recipients, and associate the escalation action with
the policy.
For information about policy escalation actions and event escalation actions,
see the PureDisk Backup Operator’s Guide.

Completing the Data Selections tab for an Export to NetBackup policy


If you specify more than one data selection, PureDisk exports each Files and
Folders data selection as a separate job. Use the following procedure to complete
this tab.
To complete the Data Selections tab
1 Expand the tree and select one or more data selections.
PureDisk displays the Files and Folders data selections. To specify a number
of data selections, select a storage pool or department. This action selects all
data selections that exist under the storage pool or department.
PureDisk associates a data selection with a client. When you save the policy,
PureDisk applies it to all specified data selections and the associated clients.
If you do not select at least one Files and Folders data selection, PureDisk
includes all data selections in the storage pool.
If you do not select at least one item in the tree, PureDisk does not save the
filter.
2 Select Include all data selections selected above.
3 (Optional) Select Apply all inclusion rules below to data selections selected
above.
Use one of the following filter methods:
■ Fill in the Data selection name or Data selection description fields with
characters and wild cards (* and ?). This method filters the data selections
based on their names or their descriptions.
For example, assume that the following data selection names exist and
are selected under a department in the tree:

U_*.jpg_files
W_*.jpg_files
W_*.xls_files

If you type *files in the Data selection name field, the policy includes
all three data selections.
If you type W* in the field, the data selections for this policy include only
the data selections that are named W_*.jpg_files and W_*.xls_files.
For more information about filtering, see the following:
See the PureDisk Backup Operator’s Guide.
■ (Conditional) Select a template from the Data selections based on template
drop-down box to specify an existing data selection template to use.

This template applies to Files and Folders data selections only. The data
selection uses the data selection rules from the template. These rules
determine the files and directories to back up.
You can select any data selection that you previously applied to the client.

Completing the Scheduling tab for an Export to NetBackup policy


Use the following procedure to complete this tab.
To complete the Scheduling tab
1 Select hourly, daily, weekly, or monthly to specify how frequently you want
this policy to run.
PureDisk runs the policy at the time you specify in the storage pool’s time
zone.
2 Select the schedule details.
Base your selections on the frequency that you specified in the previous step.

Completing the Parameters tab for an Export to NetBackup policy


The following section explains how to complete this tab.
To complete the Parameters tab
1 In the NetBackup Policy Name field, specify the name of the NetBackup
DataStore policy.
For more information about configuring NetBackup to accept PureDisk data
selections, see the following:
See “Configuring NetBackup to receive data exported from PureDisk ”
on page 79.
2 In the PureDisk to NetBackup export engine field, select the PureDisk node
where you installed the NetBackup client and the PureDisk NetBackup export
engine.
The node appears on the drop-down list.

(Optional) Completing the Metadata tab for an Export to NetBackup policy
Use the following procedure to complete this tab if you want to exclude certain
files from the export. If you want to export entire data selections, do not specify
anything on this tab.

To complete the Metadata tab


1 Click Add to define a metadata inclusion rule for the policy.
You can specify any or all filters. A file must fulfill all of the specified rules
before PureDisk includes the file in the data selection.
2 In Rule name, type a name for this filter.
Tip: Use a U or a W as the first character in the filter name. It helps you to
identify whether the filter is for a UNIX or a Windows client.
3 In Folder name, type the complete path or a pattern of the folder where the
files reside.
Tip: You can use characters and wildcards to specify both absolute folder
patterns and relative folder patterns.
4 In the File name field, type a pattern that describes the files.
For more information and examples about filtering, see the PureDisk Backup
Operator's Guide.
5 In Size, use the drop-down lists to specify a filter.
Base the choice on the size of the file. For example:
■ You can select a file size that is greater than or equal to 100 bytes and less
than or equal to 500 bytes. In this case, PureDisk includes files that are
from 100 bytes to 500 bytes.
■ You can select a file size that is less than or equal to 500 bytes. In this
case, PureDisk includes files that are from 1 byte to 500 bytes in length.
■ You can select a file size that is greater than or equal to 100 bytes. In this
case, PureDisk includes files that are 100 bytes or longer.

6 In Last Modification, use the drop-down lists to specify a filter.


Base this choice on the time the file was last modified. Select a date-based or
time-based boundary.
For example, if you specify January 20, 2009, in the Before field, the filter
includes all files that were modified before 12:01 AM on January 20, 2009.
The filter includes the files that were modified any time of day on January
19, 2009.
7 Click OK.

Caution: The filters in the Metadata tab of an Export to NetBackup policy let you
narrow the list of files that you want PureDisk to export. If you do not define
filters, PureDisk exports all the files. When you specify filters in an Export to
NetBackup policy, you might encounter occasional problems when you browse
files in the NetBackup Backup, Archive, and Restore interface. The problems can
occur with the NetBackup images that PureDisk creates. If you do not find your
exported files when you use the NetBackup Backup, Archive, and Restore
interface, use the bplist(1M) NetBackup command line utility.
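For example, a bplist command of roughly the following form lists exported files
for a client. This is a minimal sketch that uses the example client speedy from
Figure 4-1; check the bplist(1M) reference for the options that apply to your
release:

# /usr/openv/netbackup/bin/bplist -C speedy -R -l /

Run the command on the NetBackup master server, or on another host with the
NetBackup client software, as a user with permission to list that client's backups.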

To edit or delete a metadata inclusion rule


1 Select a rule.
2 Select Edit and make changes to the filter rule, or select Remove.

Running an export to NetBackup policy


The export to NetBackup policy runs according to the schedule that you specified
in the PureDisk policy when you created it.
The following procedure explains how to run the policy one time for a point-in-time
export. To complete this procedure, you must have PureDisk restore permissions.
To run an export to NetBackup policy one time
1 Click Manage > Policies.
2 In the left pane, under Data Management Policies, expand Export to
NetBackup.
3 Select the policy you want to run.
4 In the right pane, click Run Policy.
The PureDisk Backup Operator’s Guide explains how to enable policy
escalation actions and how to monitor policy runs. NetBackup writes log files
of its activity to /usr/openv/netbackup/logs/pdexport.
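With NetBackup legacy logging, log files are typically written to a subdirectory
only if that subdirectory exists. The following is a minimal sketch for the node
that hosts the NetBackup export engine, assuming the log location mentioned in
the preceding paragraph:

# mkdir -p /usr/openv/netbackup/logs/pdexport
# ls -lrt /usr/openv/netbackup/logs/pdexport

The most recently modified files in the listing correspond to the latest export job.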

Performing a point-in-time export to NetBackup


Use the following procedure to perform a one-time export of a backed up PureDisk
Files and Folders data selection to NetBackup. If you use this method, you still
need to define a NetBackup DataStore policy. However, you do not need a PureDisk
export to NetBackup policy with this method.

To perform a point-in-time export to NetBackup


1 Verify that you have a NetBackup DataStore policy defined.
See “Configuring NetBackup to receive data exported from PureDisk ”
on page 79.
2 Click Manage > Agents.
3 In the left pane, expand the tree to display the data selection that you want
to export.
4 Select the data selection that you want to export.
5 In the right pane, click Export Whole Data Selection.
6 Specify the following information in the Export files from agent dialog box:

Date                                  The date when the data selection to be exported to
                                      NetBackup was backed up.

Time                                  The time when the data selection to be exported to
                                      NetBackup was backed up.

NetBackup Policy Name                 Specify the name of the NetBackup DataStore policy.

PureDisk to NetBackup Export Engine   Select the PureDisk NetBackup export engine in your
                                      storage pool that you want to use to perform the
                                      export.

7 Click Export files.

Caution: If the export job finds no backups that match the date specified, the
job runs and shows successful completion, but PureDisk exports nothing to
NetBackup. This behavior differs from the behavior for regular backups
because regular backups fail if nothing is backed up.

Troubleshooting export job failures


The following information can be helpful in troubleshooting NetBackup export
jobs:
■ See “NetBackup export engine log files” on page 93.
■ See “Problems with inactive server agents” on page 93.

NetBackup export engine log files


To enable logging for NetBackup export jobs, change the configuration file settings.
The following procedure explains the configuration file fields to change. For
general information about how to edit configuration files, see the following:
See “About the configuration files” on page 321.
To enable logging
1 Click Settings > Configuration > Configuration File Templates > PureDisk
Server Agent > Default ValueSet for PureDisk Server Agent > debug >
logging.
Set logging to either info (default), debug, or trace.
2 Click Settings > Configuration > Configuration File Templates > PureDisk
Server Agent > Default ValueSet for PureDisk Server Agent > debug > trace.
The trace field specifies the directory to which PureDisk writes the log file.
The default is /Storage/log. Edit the trace field to specify an alternative
directory.

Problems with inactive server agents


PureDisk cannot run any job that originates from a node with an inactive server
agent. If you have more than one NetBackup export engine, and an export job fails
to run, ensure that all appropriate server nodes are active.
Specifically, PureDisk cannot run an export job if all of the following conditions
exist when you start the job:
■ You installed two or more NetBackup export engines on different PureDisk
nodes. PureDisk lets you install more than one NetBackup export engine in a
storage pool. However, if the following three conditions also exist, an export
job fails.
■ The server agent on one of the nodes that hosts a NetBackup export engine is
inactive.
■ You have export policies for each of the NetBackup export engines.
■ Each export policy exports the same data selection.
In this situation, PureDisk cannot run the policies to export that specific data
selection. Even the export jobs that are scheduled to run on nodes with active
server agents fail to run.
The server agent must always be active in order for PureDisk to run jobs for that
node. However, in this scenario, an inactive server agent can affect the scheduled
jobs on an active server agent.

To avoid this situation, ensure that each server agent on each node that hosts a
NetBackup export engine is activated.
To activate a server agent
1 Click Settings > Topology.
2 Expand the tree in the left pane so that it shows all the PureDisk services.
3 Select the service you want to start.
For example, select NBU Export Engine.
4 In the right pane, click Activate NetBackup Export Engine.

Copying or deleting an export to NetBackup policy


The following procedure explains how to copy or delete a policy that exports data
to NetBackup.
To edit, copy, or delete a policy
1 Click Manage > Policies.
2 In the tree pane, expand Data Management Policies > Export to NetBackup.
3 Select a policy.
4 In the right pane, click the action you want to perform.
Select from one of the following:
■ Delete Policy.
■ Copy Policy.
A copy of the policy appears in the tree. The policy is in the disabled state.

Restoring from NetBackup


NetBackup treats PureDisk data selections as if they were regular NetBackup files.
During the export process, NetBackup creates catalog information. The NetBackup
catalog is a record of files and the names of the clients upon which the files
originated. When you export data from PureDisk to NetBackup, the client name
that NetBackup uses is the name of the client upon which the files originated.
Example 1. Figure 4-1 shows an export policy on node_3 that exports data from
speedy to NetBackup. In NetBackup, the client name that appears in the NetBackup
catalog is speedy.

Example 2. If you replicate a data selection and then export that data selection
from the destination storage pool, the destination storage pool displays the source
client's name. The source client's name appears in the following format:
[R] client_name (agx,stpy)

In the preceding client name format, the following are replaced:


■ client_name is the name of the client.
■ x is the agent identifier.
■ y is the storage pool identifier.
When you export a replicated data selection from the destination storage pool to
NetBackup, NetBackup removes the [R] characters at the beginning, the agent
ID at the end, and the storage pool ID at the end.
For example, assume the following series of events:
■ You have two storage pools: my_spa and your_spa. A client named clientA is
attached to my_spa.
■ You replicate clientA's backed-up data selections from my_spa to your_spa.
If you look at your_spa's Web UI, clientA appears as follows:
[R] clientA (ag5,stp123).

■ You configure your_spa as a NetBackup client.


■ You export clientA's data selections from your_spa to NetBackup.
■ You restore clientA's data selections from NetBackup. When you want to
restore the data selections from clientA, look for a client named clientA in
the NetBackup interface.
The NetBackup job monitor displays the name of the node that hosts the PureDisk
NetBackup export engine and the NetBackup client. In Figure 4-1, the name that
appears in the job monitor depends on which export engine did the export. The
name is node_1 or node_3.
In the NetBackup catalog, the NetBackup policy that exports data from PureDisk
to NetBackup is called a PureDisk-Export policy. The policy type number is 38.
For more information about how to restore files from NetBackup, see the following:
See the NetBackup Administrator’s Guide, Volume I.

Restoring to a PureDisk client that is not a NetBackup client
The following procedure uses general terms to describe how to restore files from
NetBackup. This procedure assumes that you have not installed the NetBackup
client software on the PureDisk client.
To restore the files to the PureDisk environment
1 Log on to the PureDisk node that hosts the NetBackup client software.
If more than one node hosts NetBackup client software, log on to the node to
which you want to write the files.
2 Create a restore directory.
For example:

# mkdir restoredir

3 From the NetBackup administration console, use the NetBackup Backup,
Archive, and Restore interface to perform a client-redirected restore.
The client-redirected restore is needed for the following reasons:
■ You can restore only to a system that hosts the NetBackup client software.
■ The client name that appears in the NetBackup catalog is the PureDisk
client on which the files originated, and that client does not host the
NetBackup client software.
Refer to Figure 4-1. In this example, the clients speedy and kwiek appear
in the NetBackup catalog.
More information on how to enable a client-redirected restore is available.
See the NetBackup Administrator’s Guide, Volume I.
For example, assume the following:
■ The PureDisk environment is depicted in Figure 4-1.
■ The name of the NetBackup policy that performed the export was
PDExport.

■ The name of the NetBackup master server is NBUMasterServer.


■ You want to write the files to their original location on the PureDisk node
that is defined as the PureDisk NetBackup export engine (node_3). This
location was /bin/myfiles.
More information about restoring files to an alternate directory is available.
See the NetBackup Administrator’s Guide, Volume I.

4 Use a network method to move the files from the PureDisk node with the
NetBackup client software to the PureDisk client that needs the files.
For example, you can use FTP or a secure copy utility to transfer the files.
(A sketch of such a transfer appears after this procedure.)
This step writes the files to the client, but it does not put the files under
PureDisk control. Perform the next step if you want to use PureDisk to back
up the files again, which puts them under PureDisk control.
5 (Optional) Use PureDisk to back up the files.
This step puts the files back into the PureDisk environment.
More information on how to perform a backup is available.
See the PureDisk Backup Operator’s Guide.
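The following is a minimal sketch of the network transfer in step 4, based on the
example in this section. It assumes that the restored files are in /restoredir on
the node that hosts the NetBackup client, that the target client is speedy, that
root has SSH access to speedy, and that the original location was /bin/myfiles;
all of these names are illustrative:

# scp -r /restoredir/* root@speedy:/bin/myfiles/

The transfer places the files on the client but does not put them under PureDisk
control; back them up with PureDisk as described in step 5.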

Restoring to a PureDisk client that is also a NetBackup client
The following procedure uses general terms to describe how to restore files from
NetBackup. This procedure assumes that you have installed the NetBackup client
software on the PureDisk client.
To restore the files to the PureDisk environment
1 Log on to NetBackup.
2 From the NetBackup administration console, use the NetBackup Backup,
Archive, and Restore interface to perform a restore directly to the client.
3 (Optional) Use PureDisk to back up the files.
This step puts the files back into the PureDisk environment.
More information on how to perform a backup is available.
See the PureDisk Backup Operator’s Guide.
Chapter 5
Disaster recovery backup procedures
This chapter includes the following topics:

■ About disaster recovery backup procedures

■ About performing disaster recovery backups

■ About backing up your PureDisk environment using NetBackup

■ Configuring PureDisk disaster recovery backup policies

■ About backing up your PureDisk environment using scripts

■ Troubleshooting a disaster recovery backup

About disaster recovery backup procedures


A disaster recovery backup protects the data in your PureDisk environment. You
can back up your storage pool data to NetBackup, to a Samba share, or to a
third-party product. The PureDisk disaster recovery backup policies protect the
following types of PureDisk data:
■ The backup data on the content routers. The content routers are the repository
for backup files.
■ The spool area. This area is the buffer in which PureDisk stores data before it
writes the data to the content routers.
■ The PureDisk databases on the content routers, the metabase servers, the
metabase engines, and the storage pool authority. Most PureDisk services have
their own database, and there is one database management system that controls
all the service databases.

■ The PureDisk software configuration on the content routers, the metabase
servers, the metabase engines, and the storage pool authority.
For information on disaster recovery backups, see the following:
■ See “About performing disaster recovery backups” on page 100.
■ See “About backing up your PureDisk environment using NetBackup”
on page 101.
■ See “About backing up your PureDisk environment using scripts” on page 114.
■ See “Configuring PureDisk disaster recovery backup policies” on page 106.
In addition to the preceding disaster recovery backup methods, PureDisk includes
Storage Pool Authority Replication (SPAR). SPAR replicates storage pool
configuration information from one all-in-one storage pool to another. For
information about SPAR, see the following:
See “About storage pool authority replication (SPAR)” on page 199.
If you need to recover from a disaster, use the PureDisk disaster recovery script,
DR_Restore_all.sh. The disaster recovery method is the same regardless of which
disaster recovery backup method you used. For information about the recovery
process, see the following:
■ See “About restoring an unclustered PureDisk environment” on page 121.
■ See “About restoring a clustered PureDisk environment” on page 157.

Note: To recover PureDisk when you have enabled the PureDisk deduplication
option (PDDO), see the PureDisk Deduplication Option Guide. It contains
PDDO-specific information, which includes how to avoid a potential data loss
situation.

About performing disaster recovery backups


Depending on your backup method, the disaster recovery backup process is as
follows:
■ If you use NetBackup, PureDisk uses NetBackup backup policies to copy the
data to NetBackup storage. This method requires that you install NetBackup
client software on each node in the storage pool.
See “About backing up your PureDisk environment using NetBackup”
on page 101.
■ If you use a script, PureDisk calls the script that you provide.
You can use one of the following methods:

■ You can write your own backup script.


■ You can use one of the scripts that PureDisk includes. You can also modify
one of these scripts.
■ You can write a script that invokes a third-party backup product.
See “About backing up your PureDisk environment using scripts”
on page 114.

When the disaster recovery backup policy runs, it preserves all data that you need
to restore a PureDisk environment in the event of a disaster. A disaster recovery
backup ensures that you can return your environment to its previous state.
The following processes occur when the disaster recovery policy runs:
■ It backs up the metadata in the storage pool.
This backup includes the following data:
■ Storage pool authority database
■ The database(s) for the metabase engines
■ The topology files

■ It backs up the database on the content routers.


■ It creates a list of files in the spool area.
For an incremental backup, it also creates a list of changed files and new files
on the content router. An incremental backup backs up new or changed content
router data since the last full backup.
Your last full backup must be available for incremental backups.
■ It backs up the client data by using the backup method you specify. Client data
includes the spool area, the new files, and the changed files on the content
routers.
■ It backs up the configuration files on all nodes.

About backing up your PureDisk environment using NetBackup
When you use NetBackup to perform PureDisk disaster recovery backups, you
first need to create a NetBackup backup policy. PureDisk uses its own system
policies to send the data to NetBackup. You can restore the data back to PureDisk
by using the PureDisk disaster recovery scripts.
The following sections describe how to use NetBackup to back up your PureDisk
environment.

Prerequisites for NetBackup disaster recovery backups


Verify that your environment includes the following:
■ A separately mounted partition named /Storage. The separate mounting
enables high performance and is a requirement for disaster recovery backups.
(A quick way to verify the mount appears after this list.)
■ A network connection between every PureDisk node and a NetBackup
environment. The NetBackup environment must be running NetBackup server
software at the 6.0 MP5 release level or greater.
■ A NetBackup client software package at the 6.0 MP5 release level or greater.
Install this client software on every PureDisk node in the storage pool.

Note: Make sure that the NetBackup client software version number is the
same as the NetBackup environment version number.

For more information about how to install the software, see the following:
See “Configuring the NetBackup client software” on page 102.
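The following is a minimal sketch for verifying that /Storage is separately mounted
on a PureDisk node; the device names and sizes in your output depend on your
installation:

# df -h /Storage
# grep /Storage /etc/fstab

If df reports /Storage on its own file system, rather than as part of the root file
system, the partition is separately mounted.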

Configuring the NetBackup client software


The following procedure explains how to configure NetBackup client software.
To configure the NetBackup client software
1 Install the NetBackup Linux SUSE 2.6 client on each node in your PureDisk
storage pool.
If the storage pool is clustered, install the client on all nodes, including the
passive node.
For general information about how to install the NetBackup client, see the
NetBackup Installation Guide for UNIX and Linux.
When you install the client software, use the fully qualified domain name
(FQDN) for the client name. For example, answer n to the following short
name prompt:

Would you like to use "my_pdnode" as the


configured name of the NetBackup client? [y,n] (y) n
Enter the name of the NetBackup client: my_pdnode.my_domain.com

If you accepted the short name during the install, edit the
/usr/openv/netbackup/bp.conf file on each node and change the line that
identifies the client (a quick verification command appears after this procedure).
For example:

CLIENT_NAME=my_pdnode.my_domain.com

2 (Conditional) For each PureDisk node, create a file for the host FQDN and
another file for the service FQDN in the altnames directory on the NetBackup
master server.
Perform this step if the storage pool you want to back up is clustered.
This step is needed because the bp.conf file on each node contains the
physical host address. However, the backup process and the restore process
use the service address.
If necessary, create the altnames directory itself. Within the directory, create
a file of the following format for each node:

xxxxnode1.symc.be

Create these files for each node in the PureDisk storage pool.
Example 1.
To create the altnames directory on a UNIX master server, type the following
command:

# mkdir /usr/openv/netbackup/db/altnames

Example 2.
Assume that you want to create file names in the altnames directory for the
nodes in the following two-node cluster:
■ Node 1 = allinone.acme.com (host FQDN) and allinones.acme.com
(service FQDN)
■ Node 2 = passive.acme.com (host FQDN) and passives.acme.com (service
FQDN)
To create a file in the altnames directory of a UNIX master server, you type
the following commands:

# touch allinone.acme.com
# touch allinones.acme.com
# touch passive.acme.com
# touch passives.acme.com

For information about the altnames directory and creating files inside the
altnames directory, see the NetBackup Administrator’s Guide, Volume I.

3 Verify that the xinetd daemon is running on each node.


This service ensures proper communication between the NetBackup master
server and the Linux client. Type the following command to determine if
xinetd is running:

# ps -aef |grep xinetd

If it is not running, enter the following command:

# /etc/init.d/xinetd start

If you restart the system, type the following command to ensure that the
xinetd daemon starts:

# /sbin/insserv /etc/init.d/xinetd
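
As an optional final check, assuming the standard NetBackup client installation
path, you can verify from each node that the client can reach the master server
and reports the expected client name:

# /usr/openv/netbackup/bin/bpclntcmd -pn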

Enabling NetBackup for PureDisk backups


The procedure in this topic uses general terms to describe how to create NetBackup
policies for backing up PureDisk data. For more information about the specific
tasks you need to perform when you create these policies, see the following:
See the NetBackup Administrator’s Guide, Volume I.
To enable NetBackup to back up a PureDisk storage pool
1 Make sure that you have accurate topology and node identification
information for this storage pool.
This information might be needed during a disaster recovery. Make sure that
the information on the cluster planning spreadsheet (for clustered storage
pools) or on the installation worksheets (for unclustered storage pools) is
accurate.
2 Log on to the NetBackup administration console.
3 Create a NetBackup Standard policy with a user backup schedule.
This policy is for the PureDisk content router and spool area data.
When you create the NetBackup policies, note the following rules:
■ Observe the NetBackup policy naming rules.

For information about NetBackup policy names, see the following:


See “About NetBackup policy names” on page 106.
■ Remember the name of these NetBackup policies. You specify these names
again when you create the PureDisk disaster recovery backup policies.
■ When you create the NetBackup Standard policy, use the same FQDN for
the client that you specified when you installed the client. For example,
use my_pdnode.my_domain.com to identify the client. The client name
must appear identically in the NetBackup policy and in the bp.conf file
on the PureDisk node.
■ Make sure that all PureDisk nodes are included in the client list in the
NetBackup Standard policy.
■ If the storage pool is clustered, specify the node service FQDN in the
NetBackup Standard policy. Do not specify the host address.
■ If your PureDisk environment is set up with /Storage mounted on an NFS
share, make sure to check the option Follow NFS in the NetBackup
Standard policy definition.
■ Entries are not required on the NetBackup Backup Selections tab.
■ Make sure that the schedule allows backups 24 hours a day and seven
days a week. This method allows PureDisk to send data to NetBackup at
any time.

4 Create a NetBackup DataStore policy with an application backup schedule.


This policy is for the PureDisk metadata. Observe the same naming rules
regarding NetBackup policies, FQDNs, and so on as described in the following:
See “About NetBackup policy names” on page 106.
Make sure that all PureDisk nodes are included in the client list in the
NetBackup DataStore policy.
5 Use the PureDisk Web UI to edit the following policies:
■ The System policy for full DR backup
■ The System policy for incremental DR backup
PureDisk includes these policies by default. When you edit these policies,
enable them, specify information specific to your site, and optionally, create
policy escalation actions for them.
For more information about how to create PureDisk disaster recovery backup
policies, see the following:
See “Configuring PureDisk disaster recovery backup policies” on page 106.

About NetBackup policy names


The PureDisk Web UI does not verify that the policy names you enter comply with
the NetBackup naming conventions. As a consequence, it is possible to enter a
policy name in the disaster recovery backup policy that is not valid for NetBackup.
Avoid this situation.
NetBackup enforces the following naming conventions for its policies:
■ Policy names cannot start with a dash (-).
■ Policy names cannot include space characters.
■ Policy names cannot include the characters / @ # * & ^
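
As a quick check against only the rules in this list, you can test a candidate
policy name from a shell prompt before you type it into the Web UI. The policy
name in this example is a placeholder:

# echo "PD_DR_standard" | grep -Ev '^-|[[:space:]/@#*&^]' && echo "Name is acceptable"

If the name violates one of the listed rules, the command prints nothing.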

Configuring PureDisk disaster recovery backup policies

PureDisk includes the following disaster recovery backup policies:
■ A full disaster recovery backup policy, which backs up the entire storage pool.
■ An incremental disaster recovery backup policy, which backs up the
information that changed since the last full or incremental backup ran.
You might need to experiment with your disaster recovery backup policy schedules
for both full backups and incremental backups. A full backup takes longer to
complete than an incremental backup.
For example, you might run a full disaster recovery backup one time each week.
You might run incremental disaster recovery backups on the other days of the
week.
The exact schedule depends on several factors:
■ How much data is in your storage pool
■ How frequently you need to restore data
■ How quickly you need to restore data
The following procedure explains how to enable the backup policies.

To enable PureDisk disaster recovery backup policies


1 Log on to the storage pool and start the Web UI.
2 Verify that you configured NetBackup or scripts to back up your PureDisk
storage pool. Make sure that one of these backup structures is in place before
you enable the PureDisk disaster recovery backup policies.
For more information about how to create backups, see the following:
See “About backing up your PureDisk environment using NetBackup”
on page 101.
See “About backing up your PureDisk environment using scripts” on page 114.
3 Click Manage > Policies.
4 In the left pane, under Storage Pool Management Policies, expand Disaster
Recovery Backup.
5 Select one of the following policies:
■ System policy for full DR backup
■ System policy for incremental DR backup
The procedures for enabling these policies are identical. In addition, the tabs
and fields in the Web UI are the same for each of these policy types. These
policies differ only in the type of backup that PureDisk runs when you enable
them.
6 Complete the General tab.
See “Completing the General tab for a disaster recovery backup policy”
on page 108.
7 Complete the Scheduling tab.
See “Completing the Scheduling tab for a disaster recovery backup policy”
on page 109.
8 Complete the Parameters tab.
See “Completing the Parameters tab for a disaster recovery backup policy”
on page 109.
9 Click Save when done.

10 (Optional) Create a policy escalation action for this policy.


The policy escalation action defines the event escalation email message and
its recipients. Make sure to associate the escalation action with the policy.
For more information, see the PureDisk Backup Operator's Guide.
11 (Optional) Repeat the preceding steps to configure the other policy type.
For example, after you configured a System policy for full DR backup, repeat
the preceding steps but select System policy for incremental DR backup.

Completing the General tab for a disaster recovery backup policy


The following procedure explains how to complete the General tab.
To complete the General tab
1 (Optional) Type a new name for this policy in the Name field.
Perform this step only if you want to rename this policy.
2 Select Enabled or Disabled.
This setting lets you control whether PureDisk runs the policy according to
the Scheduling tab, as in the following situations:
■ If you select Enabled, PureDisk runs the policy according to the schedule.
After you enable a policy, you can run the policy on a schedule or manually.
■ If you select Disabled, PureDisk does not run the policy according to the
schedule. If a policy is disabled, PureDisk cannot run the policy according
to a schedule, and you cannot run the policy manually. This selection is
the default.
For example, if you want to stop running the policy during a system
maintenance period, select Disabled. You do not need to enter information
in the Scheduling tab to suspend and later reenable the policy.

3 (Optional) Select times in the Escalate warning after or the Escalate error
and terminate after drop-down lists.
These times specify the elapsed time before PureDisk sends an email message.
PureDisk can notify you if a backup does not complete within a specified time.
These fields allow you to define the times for escalation actions.
For example, you can configure PureDisk to send an email message to an
administrator if the policy does not complete in eight hours.

Completing the Scheduling tab for a disaster recovery backup policy


The disaster recovery backup policy runs according to the schedule you specify
when you edit the policy. The first time you run a disaster recovery backup policy,
PureDisk performs a full backup. By definition, subsequent disaster recovery
backups are incremental backups.
See “Configuring PureDisk disaster recovery backup policies” on page 106.
The following procedure explains how to complete this tab.
To complete the Scheduling tab
◆ Select an hourly, daily, weekly, or monthly schedule to specify how frequently
you want this policy to run.
You can also specify an exact start time to run the schedule.
The following are additional notes on scheduling disaster recovery backups:
■ Symantec recommends that you run disaster recovery backups when other
backups are not running. You can run a disaster recovery backup at the same
time that regular system backups run. However, you cannot restore the data
you backed up during the regular system backup.
■ When you run a full disaster recovery backup, content router performance
degrades. This issue is due to increased file system activity. In extreme cases,
a full backup can cause regular backup jobs to fail. Schedule full backups during
a time when other backups are not running.
■ You can customize the schedule to suit your site’s needs. You can experiment
with different schedules. You want to balance the frequency with which this
policy runs and system resource usage.
■ If you want to run the policy only one time, select an active policy and click
Run Policy from the left pane. After the policy runs, open the General tab
again and disable the policy. If you do not disable the policy, it runs again
according to the schedule that you specified in the Scheduling tab.
■ If you use NetBackup as your backup tool, make sure that you schedule the
PureDisk backup policies to run during NetBackup’s open window. Symantec
recommends that you specify the NetBackup schedule to allow backups 24
hours a day and seven days a week.

Completing the Parameters tab for a disaster recovery backup policy


This tab is divided into three sections. Each represents one of the three possible
methods you can use to back up your PureDisk data.

Note:
Choose only one method to back up your PureDisk data (NetBackup, Samba Share,
or Third Party Product). Then, complete all of the information fields for that
method. Do not complete any fields for the methods that you do not choose.

To choose a disaster recovery backup method


◆ Choose one of the following disaster recovery backup methods and follow
the procedure associated with that backup method:
■ NetBackup. For information about how to complete the Parameters tab
for a NetBackup backup, see the following:
See “Completing the Parameters tab on a Disaster recovery policy to back
up PureDisk to a NetBackup environment” on page 110.
■ Samba. For information about how to complete the Parameters tab for a
Samba backup, see the following:
See “Completing the Parameters tab on a Disaster recovery policy to back
up PureDisk to a Samba file system” on page 111.
■ Third party. For information about how to complete the Parameters tab
for a third-party backup, see the following:
See “Completing the Parameters tab on a Disaster recovery policy to back
up PureDisk to a third-party product” on page 112.

Completing the Parameters tab on a Disaster recovery policy to back up PureDisk to a NetBackup environment

The following procedure explains how to complete the Parameters tab to back
up PureDisk to a NetBackup environment.
Make sure that you are familiar with how to create NetBackup policies. For
information about how to create NetBackup policies, see the NetBackup
Administrator’s Guide, Volume I.
To use NetBackup to back up PureDisk
1 Select Use NetBackup.
2 In the Standard Policy field, type the name of the NetBackup standard policy
that you configured to back up this data.
3 In the DataStore Policy field, type the name of the NetBackup DataStore
policy that you configured to back up this data.

4 In the Number of Parallel streams drop-down list, select the number of
parallel streams that you specified when you created the NetBackup Standard
policy.
5 Click Save.

Completing the Parameters tab on a Disaster recovery policy to back up PureDisk to a Samba file system

The following procedure describes how to complete the Parameters tab to back
up PureDisk data to a Samba file system on another computer.
To use a Samba share to back up PureDisk
1 Make sure that Samba is configured on the computer to which you want to
write the PureDisk backups.
2 Select Use Samba Share.
3 In the Full path of data backup program field, specify the full path and the
name of the backup script that you want to use.
The /opt/pdconfigure/scripts/support/DR_BackupSampleScripts/
directory contains sample disaster recovery backup scripts.
Symantec recommends that you use the following scripts:
■ full_DR_backup.sh

■ incremental_DR_backup.sh

4 In the Directory Path Name field, specify the full path (mount point) to a
directory in which to write the backed up files.
Specify /DRdata in this field if the following are both true:
■ You used the full_DR_backup.sh script or the incremental_DR_backup.sh
script.
■ You did not modify the scripts.
These scripts write to /DRdata. The write occurs even if the directory is
mounted on another disk or partition; the script mounts the directory that
you specify and writes to it.
Specify your own directory in this field if either of the following are true:
■ You do not use the full_DR_backup.sh script or the
incremental_DR_backup.sh script.

■ You modified the scripts to write to a different directory. Make sure that
your backup scripts write to the directory you specify. PureDisk does not
mount this directory.

5 In the Share Name field, specify the name of a remote Samba shared file
system.
Use the following format for the shared file system:

//hostname/sharename

These variables are as follows:

hostname Specify the host name or IP address upon which the target
shared directory resides.

sharename Specify the name of the shared directory on hostname.

For example: //100.100.100.101/pde_dr_files


6 In the Workgroup / Domain field, specify the domain name.
7 In the User Name field, specify the Samba user name.
8 In the Password field, specify the Samba password.
9 Select Use Encryption to have PureDisk encrypt the configuration data before
it writes the data.
Use Encryption does not cause segment data to be automatically encrypted.
10 Click Save.
11 (Conditional) Update or verify the storage pool’s topology information.
Perform this step if you selected Use Encryption in the previous step.
If you perform the backup with encryption enabled, make sure that you have
accurate topology and node identification information. This information is
needed during a disaster recovery. Make sure that the information on the
cluster planning spreadsheet (for clustered storage pools) or on the installation
worksheets (for unclustered storage pools) is accurate.
Click Settings > Topology to examine the storage pool's topology.

Completing the Parameters tab on a Disaster recovery policy to back up PureDisk to a third-party product

The following procedure describes how to complete the Parameters tab to back
up PureDisk data to a third-party product.

Caution: If you choose this method, be aware that you need to copy your backups
to a secondary host. If the primary host fails, you are likely to lose both the original
files and the backed up files that are written to the local directory.

To use a third-party product to back up PureDisk


1 Select Use Third Party Product.
2 In the Full path of data backup program field, specify the full path and the
name of the backup script you want to use.
The /opt/pdconfigure/scripts/support/DR_BackupSampleScripts/
directory contains sample disaster recovery backup scripts.
Symantec recommends you use the following scripts:
■ full_DR_backup.sh

■ incremental_DR_backup.sh

3 In the Directory Path Name field, specify the full path to the directory in
which to write the backed up files.
If you modified or did not use the full_DR_backup.sh script or the
incremental_DR_backup.sh script, specify your own directory in this field.

If you used the full_DR_backup.sh script or the incremental_DR_backup.sh
script and did not modify them, specify /DRdata in this field.
These scripts write to /DRdata. The write occurs even if the directory is
mounted on another disk or partition; the script mounts the directory that
you specify and writes to it.
4 Select Use Encryption to have PureDisk encrypt the configuration data before
it writes the data.
Use Encryption does not cause segment data to be automatically encrypted.
5 Click Save.
6 (Conditional) Update or verify the storage pool’s topology information.
Perform this step if you selected Use Encryption in the previous step.
If you perform the backup with encryption enabled, make sure that you have
accurate topology and node identification information. This information is
needed during a disaster recovery. Make sure that the information on the
cluster planning spreadsheet (for clustered storage pools) or on the installation
worksheets (for unclustered storage pools) is accurate.
Click Settings > Topology to examine the storage pool's topology.

About backing up your PureDisk environment using scripts

The following sections discuss how to use scripts for disaster recovery.
Your options are as follows:
■ Use one of the sample scripts that PureDisk provides. Alternatively, you can
customize these scripts for your own site’s use.
■ Write your own backup script. This script can invoke a third-party backup
tool.

Prerequisites for script-based disaster recovery backups


Before you enable a disaster recovery policy, identify and create a repository for
the backup files that this policy creates. The repository can be a local file system
or a Samba shared file system.
For example, you can back up your PureDisk environment to a file system on a
server that is outside of the PureDisk environment. Such a file system must be
accessible as a Samba shared file system.
If you write to a shared file system outside of the PureDisk storage pool, make
sure of the following:
■ The server to which you want to write the PureDisk backup is connected to
the network.
■ A Samba shared file system is mounted on the computer to which you want
to write the backup.
■ The /Storage partition is mounted as a separate partition. The separate
mounting enables high performance and is a requirement for disaster recovery
backups.
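
For example, before you enable the policy, you can confirm from a PureDisk node
that the remote share is reachable. This check assumes that the smbclient utility
is available on the node; the host name and user name are placeholders:

# smbclient -L //backuphost -U backupuser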

PureDisk’s disaster recovery backup or restore script examples


The PureDisk installation software includes example backup scripts and example
restore scripts. You do not need to modify them for use at your site.
However if you want to modify the scripts, examine the comments in the script
files. The comments explain how to modify each one. The example scripts reside
in the following directory:

/opt/pdconfigure/scripts/support/DR_BackupSampleScripts/

Table 5-1 lists the scripts that are located in this directory and describes their
functions.

Table 5-1 Script examples

Script name                    Script function

full_DR_backup.sh              Performs a full backup of PureDisk environment data.

incremental_DR_backup.sh       Performs an incremental backup of PureDisk environment
                               data.

DRrestore.sh                   Restores data about the PureDisk environment that was
                               saved to a Samba share or was saved by a third-party
                               product.

                               For more information about how to restore an unclustered
                               PureDisk environment, see the following:
                               See “About restoring an unclustered PureDisk
                               environment” on page 121.

                               For more information about how to restore a clustered
                               PureDisk environment, see the following:
                               See “About restoring a clustered PureDisk environment”
                               on page 157.

If you modify the scripts that PureDisk provides, the scripts are not protected.
During a restore procedure, PureDisk overwrites the scripts if they remain in the
default installation directory (/opt). You must place them in another directory
for protection (for example, in /usr or /tmp).
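
For example, to keep protected copies of the modified scripts, you can copy them
to a directory of your own. The destination directory in this example is a
placeholder:

# mkdir -p /usr/DR_scripts
# cp /opt/pdconfigure/scripts/support/DR_BackupSampleScripts/*.sh /usr/DR_scripts/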

Creating a backup script


A disaster recovery backup script can back up the files directly, or it can run a
third-party backup tool. The following procedure explains how to create a disaster
recovery script to back up the spool, new, and changed data on the content routers.
If you create your own scripts, the scripts are not protected. During a restore
procedure, PureDisk overwrites the scripts if they remain in the default installation
directory (/opt). You must place them in another directory for protection (for
example, in /usr or /tmp).

To create a backup script


1 Use a text editor to create a backup script that backs up the data directly or
calls a backup product.
The backup script must perform the actual backup of the data. When you
create the script, include the following options:

--new listfile          Used for an incremental backup. Specifies that you want
                        to back up the new container directories that were created
                        since the last disaster recovery backup was performed.

--changed listfile      Used for an incremental backup. Specifies that you want
                        to back up the container directories that have changed
                        since the last disaster recovery backup ran.

--full listfile         Used for a full backup. Specifies that you want to back up
                        all container directories.

--spool spool_files     The spool area on a content router is a buffer area. It
                        holds file data until PureDisk writes the data to the
                        content router.

--agentid agent_id      Specifies the node which is to be backed up. Each node
                        that runs this script supplies its own agent identifier.

Do not create actual files for listfile or spool_files. The disaster recovery
workflow creates these files and provides them to the script.
If you run an incremental backup, and no full backup exists, PureDisk
performs a full backup.
The disaster recovery backup policy calls the script and runs it each time
with a different option, in the following order:

--new           Backs up newly created container directories

--changed       Backs up the existing container directories that have changed

--full          Backs up both the new container directories and the existing
                container directories that have changed

--spool         Backs up any files in the content router’s spool area

--agentid       Specifies the agent identification of the node that is being
                backed up

A minimal example of such a script appears after this procedure.

2 Copy the script you created to every content router in your environment.
Write this script to the same location on each content router. For example,
/opt/external_scripts.

3 Copy the scripts to a backup directory for protection.


PureDisk overwrites the scripts during a restore, so write a copy to /usr or
/tmp.
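
The following is a minimal sketch of such a backup script, written as a Bash
script. It only parses the options that the disaster recovery workflow passes
and archives the listed files with tar. The /DRdata target directory, the
archive naming, and the minimal error handling are illustrative assumptions;
adapt them to your site or replace the tar command with a call to your backup
tool.

#!/bin/bash
# Example skeleton for a custom disaster recovery backup script.
# The target directory and the tar-based copy are examples only.
TARGET=/DRdata
AGENT_ID=unknown
LISTFILE=""
MODE=""

while [ $# -gt 0 ]; do
    case "$1" in
        --new|--changed|--full|--spool)
            # The workflow supplies a list file after each of these options.
            MODE="${1#--}"
            LISTFILE="${2:?missing list file}"
            shift 2
            ;;
        --agentid)
            AGENT_ID="${2:?missing agent id}"
            shift 2
            ;;
        *)
            echo "Unknown option: $1" >&2
            exit 1
            ;;
    esac
done

if [ -z "$LISTFILE" ] || [ ! -r "$LISTFILE" ]; then
    # Nothing to copy on this invocation.
    exit 0
fi

# Archive the directories and files named in the list file under a name
# that records the node (agent ID), the backup mode, and the time.
STAMP=$(date +%Y%m%d%H%M%S)
tar -czf "$TARGET/${AGENT_ID}_${MODE}_${STAMP}.tar.gz" -T "$LISTFILE"

After you adapt a script like this one, copy it to the same location on every
content router, as step 2 describes, and keep a protected copy outside /opt.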

Troubleshooting a disaster recovery backup


The following topics describe how to troubleshoot failed disaster recovery backups:
■ See “Missing pdkeyutil file” on page 117.
■ See “Content router modes set incorrectly” on page 118.

Missing pdkeyutil file


The pdkeyutil command enables encrypted disaster recovery backups to a local
file system or to a Samba shared file system.
One possible reason for disaster recovery backup job failure is the absence of a
pdkeyutil file. The installation procedure includes a step to enable this utility.
If your backup failed, it might be because you did not enable this utility at
installation time.
If your backup fails to run, examine the job details. To examine the job details,
click Monitor > Jobs. In the right pane, click the Job Id number on the row that
includes the disaster recovery backup workflow job.
If the pdkeyutil file does not exist, the following message appears in the Job log
tab:

open(/Storage/var/keys/DR.key, ...) failed; No such file or directory (2)

To enable the pdkeyutil command, enter the following command on all active
nodes:

# /opt/pdag/bin/pdkeyutil -insert

The preceding command initiates a dialog session with the pdkeyutil utility. The
utility prompts you to specify a password for the encryption utility to use during
disaster recovery backups and restores.
Remember the password that you type. You need this password to restore PureDisk
storage pool authority configuration files in the event of a disaster.
If you do not remember this password, you cannot complete the restore.
When you perform a restore, make sure to use the password that was in effect
when your disaster recovery backup ran, even if you have changed the password
since then.
To determine if the key is enabled, enter the following command:

# /opt/pdag/bin/pdkeyutil -display

This command displays the following if the key is enabled:

Key File version: 0
Key count: 1
1. Key creation date: 2006-08-29 (current key)
Key: 1bdd79b5dee3f6a80679893236e9194c

This command displays the following if no key is enabled:

Key File version: 0
Key count: 0

Content router modes set incorrectly


After a failed disaster recovery backup, the disaster recovery backup workflow
attempts to reset the content router modes back to normal. However, depending
on the nature of the failure, these attempts can be unsuccessful.
After a failed disaster recovery backup, perform the following procedure.
To reset content router modes
1 Log into one of the content router nodes as root.
2 Type the following command:

# /opt/pdcr/bin/crcontrol --getmode

3 Examine the crcontrol command's output.


If the modes are set correctly, the output indicates that all modes =YES except
for REROUTE mode, which should be set to =NO.
The following output indicates that the modes are set correctly:

Mode : GET=Yes PUT=Yes DEREF=Yes SYSTEM=Yes STORAGED=Yes REROUTE=No

If the crcontrol command output for your content router is not set correctly,
type the following command to set one or more modes manually:

# /opt/pdcr/bin/crcontrol -mode mode=Yes

For mode, type one of the following: GET, PUT, DEREF, SYSTEM, or STORAGED.
For example, if DEREF=Hold in your output, type the following command:

# /opt/pdcr/bin/crcontrol -mode DEREF=Yes

Do this for each mode that is not correctly set.


4 Repeat this procedure for each content router node in your storage pool.
Chapter 6
Disaster recovery for unclustered storage pools
This chapter includes the following topics:

■ About restoring an unclustered PureDisk environment

■ Reinstalling required software (unclustered recovery)

■ Performing a disaster recovery of an unclustered PureDisk storage pool from
a NetBackup disaster recovery backup (NetBackup, unclustered recovery)

■ Performing a disaster recovery from a Samba backup (Samba, unclustered
recovery)

■ Performing a disaster recovery from a third-party product backup (third-party,
unclustered recovery)

About restoring an unclustered PureDisk environment


Perform the procedures in this chapter when other methods to recover data have
failed. No matter how frequently you have performed disaster recovery backups,
data loss is possible with any restore procedure. The following topics introduce
the PureDisk procedures that explain how to restore an unclustered PureDisk
environment:
■ See “When to restore your environment” on page 122.
■ See “Restore overview for an unclustered storage pool” on page 122.
For information about how to perform disaster recovery backups and how to
create PureDisk disaster recovery backup policies, see the following:
■ See “About performing disaster recovery backups” on page 100.

■ See “Configuring PureDisk disaster recovery backup policies” on page 106.

When to restore your environment


Restore your PureDisk environment when one or more of the following conditions
are present:
■ Other methods to restore an unclustered storage pool have failed.
■ One or more of your PureDisk nodes is down. That is, the hardware does not
function and cannot be repaired.
■ Disks have crashed.
■ One or more of the databases appears to be corrupted.
To determine their state, examine the following log file:

/Storage/log/pddb/postgresql.log

Possible signs of a corrupted database are messages such as the following in
the log file:

FATAL: could not open file
"/Storage/databases/pddb/data/global/1262": No such file or directory

LOG: could not open temporary statistics file
"/Storage/databases/pddb/data/global/pgstat.tmp.4391": No such file or directory
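
For example, you can scan the log for these messages with a command such as the
following; the search strings are examples only:

# grep -E 'FATAL|could not open' /Storage/log/pddb/postgresql.log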

Restore overview for an unclustered storage pool


Regardless of the method you used to perform the disaster recovery backup, you
can use the DR_Restore_all.sh script to perform restores.
PureDisk initiates the disaster recovery restore procedure when you type the
following command on the storage pool authority:

/opt/pdinstall/DR_Restore_all.sh

The DR_Restore_all.sh script performs the following actions:


■ Prompts for all of the information that is required to restore an entire storage
pool.
■ Restores the data that was backed up.

■ Optimizes the content router restore. This action occurs when the disaster
affected only a subset of the content routers.
The DR_Restore_all.sh script fully restores the /Storage/data directory of
all failed content router nodes.
If the configuration includes any content routers that do not need to be fully
recovered because no disaster occurred, the script performs minimal restores.
The restores bring the content routers to a state that is consistent with the
point in time of the last disaster recovery backup. Since the last backup was
done, some data segments might have been removed or added.
In these cases, the script does the following:
■ Restores all databases and configuration files.
■ Restores the segment containers so that they are consistent with the content
router databases.
■ Restores all segment containers that a removal job has changed or deleted
since the last backup.
To perform a disaster recovery of an unclustered storage pool
1 Reinstall the software on your storage pool.
Perform the following procedure:
■ See “Reinstalling required software (unclustered recovery)” on page 123.

2 Perform one of the following procedures, depending on the way you backed
up your PureDisk environment.
■ See “Performing a disaster recovery of an unclustered PureDisk storage
pool from a NetBackup disaster recovery backup (NetBackup, unclustered
recovery)” on page 133.
■ See “Performing a disaster recovery from a Samba backup (Samba,
unclustered recovery)” on page 138.
■ See “Performing a disaster recovery from a third-party product backup
(third-party, unclustered recovery)” on page 147.

Reinstalling required software (unclustered recovery)


Complete the following procedure to reinstall the software on the nodes that
failed. For example, if a content router failed, install PDOS on that content router
only.

To reinstall the software


1 Install PDOS on the nodes that failed.
Perform the following procedure:
See “Reinstalling PDOS” on page 124.
2 Reconfigure your storage partitions.
Perform one of the following procedures:
■ See “(Conditional) Reconfiguring the storage partitions on DAS/SAN disks”
on page 125.
■ See “(Conditional) Reconfiguring the storage partitions on iSCSI disks”
on page 129.

3 Complete the software reinstallation.


Perform the following procedure:
See “Completing the software reinstallation” on page 132.

Reinstalling PDOS
For each node that failed, install PDOS and any PDOS updates you installed onto
the PDOS base release. The following procedure explains how to reinstall PDOS.
To reinstall PDOS on the nodes that failed
1 Install PDOS on each failed node.
Use the installation instructions in the PureDisk Storage Pool Installation
Guide.
2 (Conditional) Install PDOS updates on each failed node.
Perform this step if you installed any PDOS updates on the nodes before the
disaster.
Use the installation instructions in the update README file.
3 (Conditional) Perform intermediate installation tasks.
Perform this step if the node has special requirements. For example, if you
need to disable multipathing or if you need to configure iSCSI disks for this
node, perform the additional steps in the chapter called 'Preparing to configure
the storage pool' in the PureDisk Storage Pool Configuration Guide.
4 Proceed to one of the following, depending on the disks attached to this node:
■ See “(Conditional) Reconfiguring the storage partitions on DAS/SAN disks”
on page 125.

■ See “(Conditional) Reconfiguring the storage partitions on iSCSI disks”


on page 129.

(Conditional) Reconfiguring the storage partitions on DAS/SAN disks


Perform this procedure if your node uses DAS/SAN disks.
The following overview procedure explains how to configure the storage partitions.
To specify the storage partitions
1 Create disk groups.
See “Creating disk groups (DAS/SAN disks)” on page 125.
2 Configure the /Storage partition.
See “Configuring the /Storage partition (DAS/SAN disks)” on page 126.
3 (Optional) Configure the /Storage/data and /Storage/databases partition.
See “(Optional) Configuring the /Storage/data and the /Storage/databases
partition (DAS/SAN disks)” on page 127.
4 Complete the storage configuration.
See “Completing the storage configuration (DAS/SAN disks)” on page 128.

Creating disk groups (DAS/SAN disks)


The following procedure explains how to start YaST and create disk groups.
To create disk groups
1 Type the following command to launch the SUSE Linux YaST configuration
tool:

# yast

Type yast or YaST to start the interface. Do not type other combinations of
uppercase and lowercase letters.
2 In the YaST Control Center main page, select System > Partitioner.
3 Select Yes on the warning pop-up.
4 On the Expert Partitioner page, select VxVM.
5 On the Create a Disk Group pop-up, type your site-specific name for the disk
group or accept the default.
6 Click OK.

7 On the Veritas Volume Manager: Disks Setup page, select a disk that you
want to include in the disk group.
8 Select Add Disk and press Return.
You can only add disks that are not yet partitioned. If you try to add a disk
with partitions, adding the disk to the disk group does not succeed. Delete all
partitions from the disk before you try to add the disk to the disk group.
To delete all partitions on a disk, select Expert in the YaST interface and
select Delete Partition Table and Disk Label.
9 Repeat the following steps for all the disks that you want to include in the
disk group:
■ Step 7
■ Step 8

10 Click Next.
11 Proceed to the following topic:
See “Configuring the /Storage partition (DAS/SAN disks)” on page 126.

Configuring the /Storage partition (DAS/SAN disks)


The following procedure explains how to configure the /Storage partition.
To configure the /Storage partition
1 On the Veritas Volume Manager: Volumes page, click Add.
The Create Volume pop-up appears.
2 Decide whether you can create a VxFS file system or whether you need to
create an XFS file system.
A VxFS file system is the recommended default. However, you might need to
configure an XFS file system if VxFS does not support your disks. Proceed as
follows:
■ If your disks support VxFS, proceed to the next step to specify a volume
name.
■ If your disks do not support VxFS, in the File System field, select XFS
from the drop-down list.

3 In the Volume Name field, specify a name. For example, Storage.


4 In the Mount Point field, type /Storage. You must type this name because
it is not in the drop-down list.
5 Click OK.

6 Click Next.
7 Proceed to one of the following topics:
■ If you want to configure a /Storage/data or a /Storage/databases
partition to enhance performance, proceed to the following:
See “(Optional) Configuring the /Storage/data and the /Storage/databases
partition (DAS/SAN disks)” on page 127.

Note: In a multinode storage pool, make sure to specify the same
partitioning scheme on each node.

■ If you do not want to configure additional partitions, proceed to the
following:
See “Completing the storage configuration (DAS/SAN disks)” on page 128.

(Optional) Configuring the /Storage/data and the /Storage/databases partition (DAS/SAN disks)

Perform the following procedure if you want to configure a /Storage/data and
a /Storage/databases partition. Symantec does not require that you configure
these additional partitions, but these additional partitions can increase storage
pool performance.
If you configure these additional partitions, configure them in one of the following
ways:
■ Configure /Storage/data.
or
■ Configure /Storage/data and /Storage/databases.
To configure a /Storage/data and a /Storage/databases partition
1 Specify a partition for /Storage/data.
Perform the following steps:
■ On the Veritas Volume Manager: Volumes page, click Add.
The Create Volume pop-up appears.
■ (Conditional) In the File System field, select XFS from the drop-down list
if VxFS does not support your disk types. Make sure that you specify the
same file system for all storage partitions.
■ In the Volume Name field, specify Storage_data.
■ In the Size field, specify the size for this partition.

For more information, see the PureDisk Storage Pool Installation Guide.
■ In the Mount Point field, type /Storage/data. You must type this name
because it is not in the drop-down list.
■ Click OK.

2 Specify a partition for /Storage/databases.


Perform the following steps:
■ On the Veritas Volume Manager: Volumes page, click Add.
The Create Volume pop-up appears.
■ (Conditional) In the File System field, select XFS from the drop-down list
if VxFS does not support your disk types. Make sure that you specify the
same file system for all storage partitions.
■ In the Volume Name field, specify Storage_databases.
■ In the Size field, specify the size for this partition.
For more information, see the PureDisk Storage Pool Installation Guide.
■ In the Mount Point field, type /Storage/databases. You must type this
name because it is not in the drop-down list.
■ Click OK.

3 Click Next.
4 Proceed to the following topic:
See “Completing the storage configuration (DAS/SAN disks)” on page 128.

Completing the storage configuration (DAS/SAN disks)


The following procedure explains how to finish the storage configuration.
To complete the storage configuration
1 On the Expert Partitioner page, inspect the information displayed.
Press the right-arrow key to display the Mount column. Make sure that the
Mount column is correct. If it is not, quit YaST and attempt the storage
configuration again.
2 Click the icon in the lower right-hand corner.
Depending on the installation option that you used, this icon is either Apply
or Finish.
3 On the Changes pop-up that appears, click Apply.

4 Select Finish.
5 Select Quit.

(Conditional) Reconfiguring the storage partitions on iSCSI disks


Perform this procedure if your node uses iSCSI disks.
The following overview procedure explains how to configure the storage partitions.
To specify the storage partitions
1 Create disk groups.
See the following:
See “Creating disk groups (iSCSI disks)” on page 129.
2 Configure the /Storage partition.
See the following:
See “Configuring the /Storage partition (iSCSI disks)” on page 130.
3 (Optional) Configure the /Storage/data and /Storage/databases partition.
See the following:
See “(Optional) Configuring the /Storage/data and the /Storage/databases
partitions (iSCSI disks)” on page 131.
4 Complete the storage configuration.
See the following:
See “Completing the storage configuration (iSCSI disks)” on page 132.

Creating disk groups (iSCSI disks)


The following procedure explains how to start YaST and how to create disk groups.
To create disk groups
1 (Conditional) Type the following command to launch the SUSE Linux YaST
configuration tool:

# yast

Perform this step as needed.


Type yast or YaST to start the interface. Do not type other combinations of
uppercase and lowercase letters.
2 In the YaST Control Center main page, select System > Partitioner and press
Enter.

3 Select Yes on the warning pop-up.


4 On the Expert Partitioner page, select LVM and press Enter.
5 On the Create a Volume Group pop-up, type your volume group name for
the disk group or accept the default.
6 Select OK and press Enter.
7 On the Logical Volume Manager: Physical Volume Setup page, select a disk
that you want to include in the disk group.
8 Select Add Volume and press Enter.
9 Repeat the following steps for all the disks that you want to include in the
disk group:
■ Step 7
■ Step 8

10 Select Next.
11 Proceed to the following topic:
See “Configuring the /Storage partition (iSCSI disks)” on page 130.

Configuring the /Storage partition (iSCSI disks)


The following procedure explains how to configure the /Storage partition.
To configure the /Storage partition
1 On the Logical Volume Manager: Logical Volumes page, select Add.
The Create Logical Volume pop-up appears.
2 In the File System field, select XFS from the drop-down list.
3 In the Logical Volume Name field, specify a name.
For example, Storage.
4 In the Size field, specify the size for this partition.
For information about how to configure the size for a partition, see the
PureDisk Storage Pool Installation Guide.
5 In the Mount Point field, type /Storage.
You must type this name because it is not in the drop-down list.
6 Select OK and press Enter.
7 Select Next.
8 Proceed to one of the following:

■ If you want to configure the /Storage/data partition and the
/Storage/databases partition, proceed to the following:
See “(Optional) Configuring the /Storage/data and the /Storage/databases
partitions (iSCSI disks)” on page 131.
■ If you do not want to configure the /Storage/data partition and the
/Storage/databases partition, select Next, press Enter, and proceed to
the following:
See “Completing the storage configuration (iSCSI disks)” on page 132.

(Optional) Configuring the /Storage/data and the /Storage/databases partitions (iSCSI disks)

Perform the following procedure if you want to configure a /Storage/data and
a /Storage/databases partition. Symantec does not require that you configure
these additional partitions, but these additional partitions can increase storage
pool performance.
If you configure these additional partitions, create them in one of the following
ways:
■ Create /Storage/data
or
■ Create both /Storage/data and /Storage/databases
To configure a /Storage/data and a /Storage/databases partition
1 Specify a partition for /Storage/data.
Perform the following steps to create a high-performance /Storage/data
partition:
■ On the Logical Volume Manager: Logical Volumes page, select Add.
The Create Volume pop-up appears.
■ In the File System field, select XFS from the drop-down list.
■ In the Logical Volume Name field, specify Storage_data.
■ In the Size field, specify the size for this partition.
For information about how to configure the size for a partition, see the
PureDisk Storage Pool Installation Guide.
■ In the Mount Point field, type /Storage/data.
You must type this name because it is not in the drop-down list.
■ Select OK.

2 Specify a partition for /Storage/databases.
Perform the following steps to create a high-performance /Storage/databases
partition:

■ On the Logical Volume Manager: Logical Volumes page, select Add.


The Create Volume pop-up appears.
■ In the File System field, select XFS from the drop-down list.
■ In the Logical Volume Name field, specify Storage_databases.
■ In the Size field, specify the size for this partition.
For information about how to configure the size for a partition, see the
PureDisk Storage Pool Installation Guide.
■ In the Mount Point field, type /Storage/databases.
You must type this name because it is not in the drop-down list.
■ Select OK.

3 Select Next.
4 Proceed to the following:
See “Completing the storage configuration (iSCSI disks)” on page 132.

Completing the storage configuration (iSCSI disks)


The following procedure explains how to finish the storage configuration.
To complete the storage configuration
1 On the Expert Partitioner page, inspect the information displayed.
Press the right-arrow key to display the Mount column. Make sure that the
Mount column is correct. If it is not, quit YaST and attempt the storage
configuration again.
2 Select Apply and press Enter.
3 On the Changes pop-up that appears, select Finish and press Enter.
4 Select Quit and press Enter.

Completing the software reinstallation


The following procedure explains how to complete the process of reinstalling your
software on the failed nodes.

To complete the software reinstallation


1 (Conditional) Log in as root to the node that hosts the storage pool authority
service and install the latest DR_Restore_all.sh script.
Perform this step if you added PureDisk application patches or updates to
your storage pool before the disaster.
Type the following command:

# tar -C / -xf upgrade_tar_file ./opt/pdinstall/lib/DRRestoreAll.php

For upgrade_tar_file, specify the full path to the location of the latest update
or patch that your PureDisk environment was running. For example:

# tar -C / -xf /root/NB_PDE_6.6.1.17350.tar ./opt/pdinstall/lib/DRRestoreAll.php

2 (Conditional) Install the NetBackup client software on all nodes that failed.
Perform this step if you write your disaster recovery backups to a NetBackup
environment.
3 Proceed to one of the following:
■ See “Performing a disaster recovery of an unclustered PureDisk storage
pool from a NetBackup disaster recovery backup (NetBackup, unclustered
recovery)” on page 133.
■ See “Performing a disaster recovery from a Samba backup (Samba,
unclustered recovery)” on page 138.
■ See “Performing a disaster recovery from a third-party product backup
(third-party, unclustered recovery)” on page 147.

Performing a disaster recovery of an unclustered PureDisk storage pool from a NetBackup disaster recovery backup (NetBackup, unclustered recovery)

The following procedure explains how to recover an unclustered PureDisk storage
pool that was backed up with NetBackup.

To perform a disaster recovery from a NetBackup disaster recovery backup


1 (Conditional) Clean up after a failed full disaster recovery backup.
See “(Conditional) Cleaning up after a failed full disaster recovery backup
(NetBackup, unclustered recovery)” on page 134.
2 Use the DR_Restore_all script.
See “Using the DR_Restore_all script (NetBackup, unclustered recovery)”
on page 134.

(Conditional) Cleaning up after a failed full disaster recovery backup (NetBackup, unclustered recovery)

Perform this procedure only if a previous full disaster recovery backup failed.
To expire the backup images from failed backups
◆ From the NetBackup interface, manually expire any NetBackup images that
are newer than the date of the last successful backup.
Symantec highly recommends that you purge any corrupted backup images
from any unsuccessful backups before you try the restore. You can search
the catalog and use the PureDisk server as the client name. Look for the
images that have the Standard and DataStore disaster recovery policies.
For information about how to search the catalog for images, see the NetBackup
Administrator’s Guide, Volume I.
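
If you prefer the command line, and the standard NetBackup installation paths
apply, you can list and then expire images with commands such as the following.
The client name, time range, and backup ID are placeholders:

# /usr/openv/netbackup/bin/admincmd/bpimagelist -client my_pdnode.my_domain.com -hoursago 168
# /usr/openv/netbackup/bin/admincmd/bpexpdate -backupid my_pdnode.my_domain.com_1232000000 -d 0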

Using the DR_Restore_all script (NetBackup, unclustered recovery)


The DR_Restore_all script initiates a dialog with you. Answer the questions that
the script displays.
To use the DR_Restore_all script
1 Make sure that all storage partitions are mounted.
For example, use the following mount(8) command to verify the mounts:

# mount | grep Storage

2 Run the disaster recovery script from the storage pool authority node.
From the PDOS command line, type the following command:

# /opt/pdinstall/DR_Restore_all.sh

3 Specify the method you used to back up PureDisk.


For example:

Please choose the method you used to do the Disaster Recovery Backup
1. NetBackup
2. Samba Share
3. Local Directory
Backup Method (1|2|3): 1

4 Affirm whether you installed the NetBackup client on the node.


If you answer no, the script stops and you must restart the script after you
install the NetBackup client software.
For example:

Is the NetBackup client installed on all nodes and pointing to
the NBU Server that was used to do the backups? [Yn]: y

5 Provide the full path of any upgrade patch files that need to be applied.

Please provide the location of the upgrade patch tar file.
For multiple patches, enter in the order they should be applied.
(leave blank for none) :

For example:
/root/NB_PDE_6.5.1.17630.tar

Leave blank and press Enter if there are no patches to apply.


To apply multiple upgrade patches, provide the latest that can be installed
on top of the base version. Otherwise, provide the patch locations in the order
the patches should be applied. This situation is applicable to EEBs.
Next, respond to the following prompt:

Are there any more patches that need to be applied? (Yn) :

Answer yes (y) to apply additional patches. Answer no (n) to continue with
the disaster recovery process.
6 Respond to the prompts regarding the topology file.
Directory /Storage/etc must contain the following files:

■ topology.ini or topology.ini.enc

■ topology_nodes.ini

If these files are not present, the script retrieves them. If topology.ini.enc
is present, the script issues the following prompt for the password:

topology.ini file needs to be decrypted before proceeding
enter aes-256-cbc decryption password:

Type the password that you use for the storage pool configuration wizard.
7 Examine the topology information the script displays and specify the nodes
you want to restore.
The script reads the topology file and presents a display like the following
example:

STORAGE POOL TOPOLOGY

Node   IP Address    Services
----   ----------    ---------
1      10.80.62.1    spa mbe mbs cr nbu
2      10.80.62.2    cr
Node number(s): 1

The preceding example shows the topology that the script can restore in this
disaster recovery operation. Examine this information for accuracy and
specify the node numbers that you want to restore. If you want to restore
more than one node, use commas to separate the node numbers.
If a node did not fail and you want to preserve the data on that node, do not
specify that node number. The restore procedure completely reinstalls the
whole topology. However, for the nodes that did not fail, the script hides
everything in /Storage by unmounting the mount points before it removes
the data.
After the reinstall is complete, the script performs the following actions:
■ Remounts those mount points
■ Restores all the data on the failed nodes
■ Restores all the databases on all the nodes
■ Restores any removed data. A data removal job might have been run since
the last time the databases were backed up. For this reason, the script also
restores the removed data on the nodes that did not fail. This action
synchronizes the databases and data.

8 Respond to queries from the script regarding passwords.


The restore process can take hours to complete. As the script runs, you might
be asked to specify the system passwords for the remote (or local) nodes. This
prompt occurs early in the process during secure shell (SSH) authentication.
9 (Conditional) Respond to the queries that are displayed by the upgrade
patches.
Refer to the README files that came with the upgrade patches for specific
details about the queries.
Respond no to creating jobs for upgrading agents.
At the end of the upgrade installation, the script prompts you to encrypt the
topology.ini file. Answer no at this time to continue. You will have a chance
to encrypt the topology.ini file at the end of the disaster recovery process.
10 When the restore is complete, answer the prompts about encryption of the
topology.ini file.

For example:

Would you like to encrypt the topology.ini file? [Yn]:y
Encrypting /opt/pdinstall/topology.ini
enter aes-256-cbc encryption password: xxxx
Verifying - enter aes-256-cbc encryption password:

Type the password that you use for the storage pool configuration wizard.
11 Observe the completion message.
When the operation completes successfully, the script displays the following
message:

Disaster recovery complete

12 (Conditional) Run the following script on the storage pool authority node to
upgrade the security protocol:

# /opt/pdinstall/disable_sslv2.sh

Perform this step if you ran the disable_sslv2.sh script on this storage pool
at any time. The disaster recovery restore does not rerun this script
automatically.
Symantec recommends that you run the script unless PureDisk 6.5.x storage
pools need to replicate to this storage pool.

13 Perform a full disaster recovery backup.


Make sure you perform a full disaster recovery backup before you perform
any file backups or perform any incremental disaster recovery backups.
14 (Conditional) Re-enable the NetBackup export engine on any nodes that hosted
only a NetBackup export engine service.
Perform this step only if you have a node that hosted only a NetBackup export
engine service.
For information about how to enable a NetBackup export engine, see the
following:
See “About exporting data to NetBackup” on page 73.

Performing a disaster recovery from a Samba backup (Samba, unclustered recovery)
The following procedure explains how to recover an unclustered PureDisk storage
pool that was backed up to a Samba share.
To perform a disaster recovery from a Samba backup
1 (Conditional) Recreate your topology information.
See “(Conditional) Recreate your topology information (Samba, unclustered
recovery)” on page 138.
2 (Conditional) Remove corrupted files from an incomplete backup.
See “(Conditional) Removing corrupted files from an incomplete backup
(Samba, unclustered recovery)” on page 140.
3 (Conditional) Prepare the storage pool authority node for disaster recovery.
See “(Conditional) Preparing the storage pool authority node for disaster
recovery (Samba, unclustered recovery)” on page 141.
4 Use the DR_Restore_all script.
See “Using the DR_Restore_all script (Samba, unclustered recovery)”
on page 141.

(Conditional) Recreate your topology information (Samba, unclustered recovery)
Perform one of the procedures in this section if you enabled encryption for your
disaster recovery backups.

You need your storage pool's topology information in order to perform the restore.
Perform one of the following procedures:
■ See “(Conditional) Recreating the topology with current topology information
(Samba, unclustered recovery)” on page 139.
■ See “(Conditional) Recreating the topology without current topology
information (Samba, unclustered recovery)” on page 139.

(Conditional) Recreating the topology with current topology information (Samba, unclustered recovery)
Perform the following procedure if all of the following conditions are true:
■ You enabled encryption in the PureDisk disaster recovery backup policy.
■ You have a backup copy of this storage pool’s topology and you can recreate
it.
To recreate the topology when you have the storage pool’s topology information
◆ Enter the following command and follow the prompts to recreate this storage
pool’s topology:

# /opt/pdinstall/edit_topology.sh

Your goal is to recreate the PureDisk topology so that it matches the topology
that existed before the disaster.
For information about the storage pool’s topology and node identification
information, see the worksheets that you completed during this storage pool’s
installation.

(Conditional) Recreating the topology without current topology information (Samba, unclustered recovery)
Perform the following procedure if all of the following conditions are true:
■ You enabled encryption in the PureDisk disaster recovery backup policy.
■ You do not have a backup copy of this storage pool’s topology.

To recreate the topology without current topology information


1 Enter the following command and follow the prompts to recreate this storage
pool’s topology:

# /opt/pdinstall/edit_topology.sh

Include only the storage pool authority in the topology.

Note: Make sure you enter the correct storage pool ID. Make sure that all
passwords you use during the disaster recovery process are the same as those
that existed before the disaster occurred.

2 Enter the following command to install the new storage pool:

# /opt/pdinstall/install_newStoragePool.sh

3 Remove the following files:

/Storage/etc/topology.ini
/Storage/etc/topology_nodes.ini

The preceding files are not needed at this time. The disaster recovery script
restores these files from the backup.
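For example, you might remove both files with the following command:

# rm -f /Storage/etc/topology.ini /Storage/etc/topology_nodes.ini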

(Conditional) Removing corrupted files from an incomplete backup (Samba, unclustered recovery)
Perform this procedure only if a previous full disaster recovery backup failed.
The PureDisk disaster recovery full backup scripts create a previous directory
at the root level of the share that contains the previous backup data. PureDisk
creates this directory at the start of the full backup and removes it at the end of
a successful backup.
If the backup fails, the previous directory still exists on the share. If this directory
exists at the start of another backup run, PureDisk preserves the contents. It
deletes the current backup data (from the failed backup) before it starts the new
backup. PureDisk removes the previous directory only when a new full backup
is successful.
If you need to perform a disaster restore and the previous directory exists, move
the contents of this directory to the share’s root level and then remove the
previous directory. Because the previous directory still exists, any contents
currently at the share’s root level are from a failed disaster recovery backup.

To remove corrupted files from an incomplete backup


1 Search for a directory that is called previous in the root level of the share.
The directory contains the previous backup data.
2 Move the contents of the previous directory to the root level of the share.
3 Remove the directory called previous.
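For example, if the share is mounted locally at /DRdata (a hypothetical mount point; your share might be mounted elsewhere), the cleanup might look like the following minimal sketch. If the failed backup left entries with the same names at the root level, remove or rename those entries before the move.

# mv /DRdata/previous/* /DRdata/
# rmdir /DRdata/previous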

(Conditional) Preparing the storage pool authority node for disaster recovery (Samba, unclustered recovery)
Perform this procedure if the following statements are true:
■ You are NOT restoring the storage pool authority node of your PureDisk
environment.
■ The PureDisk version you are restoring requires upgrade patches.
To prepare the storage pool authority node for disaster recovery
1 Log on to the storage pool authority node.
2 Change to the /opt/pdconfigure/var/nodesoftware directory.
3 Remove all files in this directory with the name patch-*.tgz.
You should now only have one file left with the name
puredisk-base_version.tgz, where base_version is the base version of your
PureDisk environment. If you were running 6.6.X.X, the file would be
puredisk-6.6.0.10534.tgz.
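For example, a minimal sketch of this cleanup might look like the following; the file name in the ls output is only illustrative:

# cd /opt/pdconfigure/var/nodesoftware
# rm -f patch-*.tgz
# ls
puredisk-6.6.0.10534.tgz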

Using the DR_Restore_all script (Samba, unclustered recovery)


The DR_Restore_all script initiates a dialog with you. You need to answer the
questions that the script displays.
To use the DR_Restore_all script
1 Make sure that all storage partitions are mounted.
For example, use the following mount(8) command to verify the mounts:

# mount | grep Storage

2 Run the disaster recovery script from the storage pool authority node.
From the PDOS command line, type the following command:

# /opt/pdinstall/DR_Restore_all.sh

3 Specify the method you used to back up PureDisk.


For example:

Please choose the method you used to do the Disaster Recovery Backup
1. NetBackup
2. Samba Share
3. Local Directory
Backup Method (1|2|3): 2

4 Provide the information that PureDisk needs to mount the shared file system.
For example:

Please enter remote samba share (i.e. //11.88.77.33/remoteSambaShare):
//rmns1.min.boston.com/PD_DRdata

5 Provide information about the local mount point.


The local mount point is the path name of a directory on which to mount the
share. If you used the DR_Restore_all script in the default PureDisk location,
the mount point must be /DRdata.

Please enter local mount point (default: /DRdata):

If appropriate, press Enter to accept the default.


6 Provide authentication information.
This information includes the user name, password, and work group for the
Samba share on the remote file server.
For example:

Please enter samba user name: pduser
Please enter samba password: pdpwd
Please enter samba workgroup: pdwgroup
Please enter location to restore CR data and spool area from (default: /DRdata):
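Before you run the restore, you can optionally verify that the node can reach the Samba share. The following is a minimal sketch that assumes the smbclient utility is available on the node; the server, user, and workgroup shown are the values from the preceding example:

# smbclient -L //rmns1.min.boston.com -U pduser -W pdwgroup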

7 Type the full system path name to the disaster recovery script used to save
your PureDisk data.
If you used the DR_Restore_all script in the default PureDisk location, press
return.
If you supplied your own restore script, PureDisk does not protect it. Scripts
that remain in the default installation directory (/opt) are overwritten during
a restore procedure. Copy your scripts to another directory for protection (for
example, to /usr or /tmp).
For example:

Please enter full path of customized DR restore script (default:
/opt/pdconfigure/scripts/support/DR_BackupSampleScripts/DRrestore.sh):
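For example, if you keep a customized restore script, a minimal way to store it outside of /opt (done before you start the restore) might look like the following sketch; the /usr/local/dr_scripts directory is an arbitrary choice, and the source path is only illustrative:

# mkdir -p /usr/local/dr_scripts
# cp -a /opt/pdconfigure/scripts/support/DR_BackupSampleScripts/DRrestore.sh /usr/local/dr_scripts/

You would then type /usr/local/dr_scripts/DRrestore.sh at this prompt.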

8 Respond to the prompts regarding encryption.


If you used a pdkeyutil password to enable encryption of your disaster
recovery data during backup, supply the password that you specified.
If you did not use pdkeyutil, type no; the script does not ask you for a
password.
For example:

Was encryption used to do the Disaster Recovery Backup [Yn]: y


Please provide the pdkeyutil pass phrase: *******

9 Provide the full path of any upgrade patch files that need to be applied.

Please provide the location of the upgrade patch tar file.


For multiple patches, enter in the order they should be applied.
(leave blank for none) :

For example:
/root/NB_PDE_6.6.1.17630.tar

Leave blank and press Enter if there are no patches to apply.


To apply multiple upgrade patches, provide only the latest patch that can be
installed directly on top of the base version. Otherwise, provide the patch
locations in the order in which the patches should be applied; this case
applies to EEBs (emergency engineering binaries).
Next, respond to the following prompt:

Are there any more patches that need to be applied? (Yn) :

Answer yes (y) to apply additional patches. Answer no (n) to continue with
the disaster recovery process.
10 Type the storage pool ID for the storage pool that you want to restore.
This ID is the value specified for the storagepoolid property in the
topology.ini file. This value is used to retrieve the topology file.

For example:

Please enter Storage Pool ID:

11 Respond to the prompts regarding the topology file.


Directory /Storage/etc must contain the following files:
■ topology.ini or topology.ini.enc

■ topology_nodes.ini

If these files are not present, the script retrieves them. If topology.ini.enc
is present, the script issues the following prompt for the password:

topology.ini file needs to be decrypted before proceeding


enter aes-256-cbc decryption password:

Type the password that you use for the storage pool configuration wizard.
12 Examine the topology information the script displays and specify the nodes
you want to restore.
The script reads the topology file and presents a display like the following
example:

STORAGE POOL TOPOLOGY


Node IP Address Services
---- ---------- ---------
1 10.80.62.1 spa mbe mbs cr nbu
2 10.80.62.2 cr
Node number(s): 1

The preceding example shows the topology that the script can restore in this
disaster recovery operation. Examine this information for accuracy and
specify the node numbers that you want to restore. If you want to restore
more than one node, use commas to separate the node numbers.
If a node did not fail and you want to preserve the data on that node, do not
specify that node number. The restore procedure completely reinstalls the
whole topology. However, for the nodes that did not fail, the script hides
everything in /Storage by unmounting the mount points before it removes
the data.
After the reinstall is complete, the script performs the following actions:
■ Remounts those mount points
■ Restores all the data on the failed nodes
■ Restores all the databases on all the nodes
■ Restores the removed data. A data removal job might have been run since
the last time the databases were backed up. For this reason, the script also
restores the removed data on the nodes that did not fail. This method
synchronizes the databases and data.

13 Respond to queries from the script regarding passwords.


The restore process can take hours to complete. As the process runs, you
might be asked to specify the system passwords for the remote (or local)
nodes. This prompt occurs early in the process during SSH authentication.
14 (Conditional) Respond to queries that are displayed by the upgrade patches.
Refer to the README files that came with the upgrade patches for specific
details about the queries.
Respond no to creating jobs for upgrading agents.
At the end of the upgrade installation, the script prompts you to encrypt the
topology.ini file. Answer no at this time to continue. You have another
chance to encrypt the topology.ini file at the end of the disaster recovery
process.

15 When the restore is complete, answer the prompts about encryption of the
topology.ini file.

For example:

Would you like to encrypt the topology.ini file? [Yn]:y


Encrypting /opt/pdinstall/topology.ini
enter aes-256-cbc encryption password: xxxx
Verifying - enter aes-256-cbc encryption password:

Type the password that you use for the storage pool configuration wizard.
16 Observe the completion message.
When the operation completes successfully, the script displays the following
message:

Disaster recovery complete

17 (Conditional) Run the following script on the storage pool authority node to
upgrade the security protocol:

# /opt/pdinstall/disable_sslv2.sh

Perform this step if you ran the disable_sslv2.sh script on this storage pool
at any time. The disaster recovery restore does not run this script
automatically.
Symantec recommends that you run the script unless PureDisk 6.5.x storage
pools need to replicate to this storage pool.
18 Perform a full disaster recovery backup.
Make sure you perform a full disaster recovery backup before you perform
any file backups or perform any incremental disaster recovery backups.
19 (Conditional) Re-enable the NetBackup export engine on any nodes that hosted
only a NetBackup export engine service.
Perform this step only if you have a node that hosted only a NetBackup export
engine service.
For information about how to enable a NetBackup export engine, see the
following:
See “About exporting data to NetBackup” on page 73.

Performing a disaster recovery from a third-party product backup (third-party, unclustered recovery)
The following procedure explains how to recover an unclustered PureDisk storage
pool that was backed up by using a third-party product.
To perform a disaster recovery from a third-party product
1 (Conditional) Recreate your topology information.
See “(Conditional) Recreate your topology information (third-party,
unclustered recovery)” on page 147.
2 (Conditional) Remove corrupted files from an incomplete full disaster recovery
backup.
See “(Conditional) Removing corrupted files from an incomplete full disaster
recovery backup (third-party, unclustered recovery)” on page 149.
3 (Conditional) Prepare the storage pool authority node for disaster recovery.
See “(Conditional) Preparing the storage pool authority node for disaster
recovery (third-party, unclustered recovery)” on page 150.
4 Run the DR_Restore_all script.
See “Using the DR_Restore_all script (third-party, unclustered recovery)”
on page 150.

(Conditional) Recreate your topology information (third-party, unclustered recovery)
Perform one of the following procedures if you enabled encryption for your disaster
recovery backups.
You need your storage pool's topology information in order to perform the restore.
Perform one of the following procedures:
■ See “(Conditional) Recreating the topology with current topology information
(third-party, unclustered recovery)” on page 147.
■ See “(Conditional) Recreating the topology without current topology
information (third-party, unclustered recovery)” on page 148.

(Conditional) Recreating the topology with current topology information (third-party, unclustered recovery)
Perform the following procedure if all of the following conditions are true:
■ You enabled encryption in the PureDisk disaster recovery backup policy.

■ You have a backup copy of this storage pool’s topology and you can recreate
it.
To recreate the topology when you have the storage pool’s topology information
◆ Enter the following command and follow the prompts to recreate this storage
pool’s topology:

# /opt/pdinstall/edit_topology.sh

Your goal is to recreate the PureDisk topology so that it matches the topology
that existed before the disaster.
For information about the storage pool’s topology and node identification
information, see the worksheets that you completed during this storage pool’s
installation.

(Conditional) Recreating the topology without current topology information (third-party, unclustered recovery)
Perform the following procedure if all of the following conditions are true:
■ You enabled encryption in the PureDisk disaster recovery backup policy.
■ You do not have a backup copy of this storage pool’s topology.

To recreate the topology without current topology information


1 Enter the following command and follow the prompts to recreate this storage
pool’s topology:

# /opt/pdinstall/edit_topology.sh

Include only the storage pool authority in the topology.

Note: Make sure you enter the correct storage pool ID. Make sure that all
passwords you use during the disaster recovery process are the same as those
that existed before the disaster occurred.

2 Enter the following command to install the new storage pool:

# /opt/pdinstall/install_newStoragePool.sh

3 Remove the following files:

/Storage/etc/topology.ini
/Storage/etc/topology_nodes.ini

The preceding files are not needed at this time. The disaster recovery script
restores these files from the backup.

(Conditional) Removing corrupted files from an incomplete full disaster recovery backup (third-party, unclustered recovery)
Symantec recommends that you follow some procedure at your site to ensure that
the complete contents of a disaster recovery backup are preserved. The
preservation ensures consistent backup data.
For example, assume that you perform a full disaster recovery backup each
Monday. On the first Monday of the month, the backup runs without problems.
On the second Monday of the month, the disaster recovery backup removes the
contents of the first Monday’s backup. However, the second full disaster recovery
backup fails. This failure leaves you without any consistent backup data.
See “(Conditional) Removing corrupted files from an incomplete backup (Samba,
unclustered recovery)” on page 140.

To remove corrupted files from an incomplete full disaster recovery backup


1 Examine your backup repository.
2 Locate the last successful backup.
3 Remove any subsequent failed backups from the system.

(Conditional) Preparing the storage pool authority node for disaster recovery (third-party, unclustered recovery)
Perform this procedure if the following statements are true:
■ You are NOT restoring the storage pool authority node of your PureDisk
environment.
■ The PureDisk version you are restoring requires upgrade patches.
To prepare the storage pool authority node for disaster recovery
1 Log on to the storage pool authority node.
2 Change to the /opt/pdconfigure/var/nodesoftware directory.
3 Remove all files in this directory with the name patch-*.tgz.
You should now only have one file left with the name
puredisk-base_version.tgz, where base_version is the base version of your
PureDisk environment. If you were running 6.6.X.X, the file could be
puredisk-6.6.0.10534.tgz.

Using the DR_Restore_all script (third-party, unclustered recovery)


The following procedure explains how to use the DR_Restore_all script.
To use the DR_Restore_all script
1 Make sure that all storage partitions are mounted.
For example, use the following mount(8) command to verify the mounts:

# mount | grep Storage

2 Run the disaster recovery script from the storage pool authority node.
From the PDOS command line, type the following command:

# /opt/pdinstall/DR_Restore_all.sh

3 Specify the method you used to back up PureDisk.


For example:

Please choose the method you used to do the Disaster Recovery Backup
1. NetBackup
2. Samba Share
3. Local Directory
Backup Method (1|2|3): 3

4 Provide information about where the data to be restored is located.


Press Enter to accept the defaults for the following prompts:

Please enter location to restore metadata from (default: /DRdata):
Please enter location to restore CR data and spool area from (default: /DRdata):

5 Type the full system path name to the disaster recovery script used to save
your PureDisk data.
If you used the DR_Restore_all script in the default PureDisk location, press
return.
If you supplied your own restore script, remember that the scripts are not
protected. The scripts are overwritten during a restore procedure if they
remain in the default installation directory (/opt). To prevent this problem,
you must place them in another directory for protection, such as in /usr or
/tmp.

For example:

Please enter full path of customized DR restore script (default:
/opt/pdconfigure/scripts/support/DR_BackupSampleScripts/DRrestore.sh):

6 Respond to the prompts regarding encryption.


If you used a pdkeyutil password to enable encryption of your disaster
recovery data during backup, supply the password that you specified.
If you did not use pdkeyutil, type no; the script does not ask you for a
password.

Was encryption used to do the Disaster Recovery Backup [Yn]: y


Please provide the pdkeyutil pass phrase: *******

7 Provide the full path of any upgrade patch files that need to be applied.

Please provide the location of the upgrade patch tar file.


For multiple patches, enter in the order they should be applied.
(leave blank for none) :

For example:
/root/NB_PDE_6.6.1.17630.tar

Leave blank and press Enter if there are no patches to apply.


To apply multiple upgrade patches, provide only the latest patch that can be
installed directly on top of the base version. Otherwise, provide the patch
locations in the order in which the patches should be applied; this case
applies to EEBs (emergency engineering binaries).
Next, respond to the following prompt:

Are there any more patches that need to be applied? (Yn) :

Answer yes (y) to apply additional patches. Answer no (n) to continue with
the disaster recovery process.
8 Type the storage pool ID for the storage pool that you want to restore.
This ID is the value specified for the storagepoolid property in the
topology.ini file. This value is used to retrieve the topology file.

For example:

Please enter Storage Pool ID:

9 Respond to the prompts regarding the topology file.


Directory /Storage/etc must contain the following files:
■ topology.ini or topology.ini.enc

■ topology_nodes.ini

If these files are not present, the script retrieves them. If topology.ini.enc
is present, the script issues the following prompt for the password:

topology.ini file needs to be decrypted before proceeding


enter aes-256-cbc decryption password:

Type the password that you use for the storage pool configuration wizard.
10 Examine the topology information the script displays and specify the nodes
you want to restore.

The script reads the topology file and presents a display like the following
example:

STORAGE POOL TOPOLOGY


Node IP Address Services
---- ---------- ---------
1 10.80.62.1 spa mbe mbs cr nbu
2 10.80.62.2 cr
Node number(s): 1

The preceding example shows the topology that the script can restore in this
disaster recovery operation. Examine this information for accuracy and
specify the node numbers that you want to restore. If you want to restore
more than one node, use commas to separate the node numbers.
If a node did not fail and you want to preserve the data on that node, do not
specify that node number. The restore procedure completely reinstalls the
whole topology. However, for the nodes that did not fail, the script hides
everything in /Storage by unmounting the mount points before it removes
the data.
After the reinstall is complete, the script performs the following actions:
■ Remounts those mount points
■ Restores all the data on the failed nodes
■ Restores all the databases on all the nodes
■ Restores the removed data. A data removal job might have been run since
the last time the databases were backed up. In that case, the script also
restores the removed data on the nodes that did not fail. This method
synchronizes the databases and data.

11 Respond to queries from the script regarding passwords.


The restore process can take hours to complete. As the script runs, you might
be asked to specify the system passwords for the remote (or local) nodes. This
prompt occurs early in the process during SSH authentication.

12 (Conditional) Respond to the queries that are displayed by the upgrade
patches.
Refer to the README files that came with the upgrade patches for specific
details about the queries.
Respond no to creating jobs for upgrading agents.
At the end of the upgrade installation, the script prompts you to encrypt the
topology.ini file. Answer no at this time to continue. You have a chance to
encrypt the topology.ini file at the end of the disaster recovery process.
13 When the restore is complete, answer the prompts about encryption of the
topology.ini file.

For example:

Would you like to encrypt the topology.ini file? [Yn]:y


Encrypting /opt/pdinstall/topology.ini
enter aes-256-cbc encryption password: xxxx
Verifying - enter aes-256-cbc encryption password:

Type the password that you use for the storage pool configuration wizard.
14 Observe the completion message.
When the operation completes successfully, the script displays the following
message:

Disaster recovery complete

15 (Conditional) Run the following script on the storage pool authority node to
upgrade the security protocol:

# /opt/pdinstall/disable_sslv2.sh

Perform this step if you ran the disable_sslv2.sh script on this storage pool
at any time. The disaster recovery restore does not run this script
automatically.
Symantec recommends that you run the script unless PureDisk 6.5.x storage
pools need to replicate to this storage pool.

16 Perform a full disaster recovery backup.


Make sure you perform a full disaster recovery backup before you perform
any file backups or perform any incremental disaster recovery backups.
17 (Conditional) Re-enable the NetBackup export engine on any nodes that hosted
only a NetBackup export engine service.
Perform this step only if you have a node that hosted only a NetBackup export
engine service.
For information about how to enable a NetBackup export engine, see the
following:
See “About exporting data to NetBackup” on page 73.
Chapter 7
Disaster recovery for clustered storage pools
This chapter includes the following topics:

■ About restoring a clustered PureDisk environment

■ Recovering from a single-node failover

■ Recovering one active node

■ Recovering from a data storage corruption

■ Recovering from a complete storage pool disaster (clustered, complete storage pool disaster)

■ (Conditional) Cleaning up after a failed full disaster recovery backup

■ (Conditional) Recreate your topology information

■ Running the DR_Restore_all script to recover the data

About restoring a clustered PureDisk environment


Perform the procedures in this chapter when other methods to recover data have
failed. No matter how frequently you have performed disaster recovery backups,
data loss is possible with any restore procedure. When a clustered storage pool
experiences a disaster, determine the type of disaster the storage pool experienced.
Clustered storage pool disasters can take on one of the following forms:
■ Single-node failover. In this scenario, only one node has failed. The storage
pool is still functioning, but you need to recover the failed node so you can put
it back into service.

■ One-node or multiple-node failure. In this scenario, at least one node in a
cluster has failed, but other nodes in the cluster are still running. In this
scenario, the storage pool has not experienced a complete disaster because
some active nodes are still running. The disaster recovery procedure explains
how to recover one node. Repeat the procedure on other nodes if more than
one node has failed.
■ Data storage corruption. In this scenario, the shared disks that host /Storage,
/Storage/data, and/or /Storage/databases are corrupt.

■ Complete disaster. In this scenario, all or most of the storage pool has
experienced a disaster such as a computer-room flood or fire. You need to
recover multiple nodes.
To perform a disaster recovery of a clustered storage pool
1 Prepare the failed nodes for recovery.
In most cases, no matter what kind of disaster occurred, you need to prepare
the nodes before you run the disaster recovery script (DR_Restore_all.sh).
Perform one of the following procedures, depending on type of disaster that
occurred:
■ See “Recovering from a single-node failover” on page 159.
■ See “Recovering one active node” on page 160.
■ See “Recovering from a data storage corruption” on page 166.
■ See “Recovering from a complete storage pool disaster (clustered, complete
storage pool disaster)” on page 169.

2 (Conditional) Clean up after a failed full disaster recovery backup.


Perform the following procedure if a previous full disaster recovery backup
failed:
See “(Conditional) Cleaning up after a failed full disaster recovery backup”
on page 172.

3 (Conditional) Recreate your topology.


Perform this step if you enabled encryption during your disaster recovery
backups and if you backed up your storage pool to a Samba shared file system
or to a third-party product.
See “(Conditional) Recreate your topology information” on page 174.
4 Run the DR_Restore_all.sh script.
The script prompts you to specify information about the storage pool. The
script's questions differ depending on the disaster recovery backup method
you used.
See “Running the DR_Restore_all script to recover the data” on page 175.
For information about how to perform disaster recovery backups and how to
create PureDisk disaster recovery backup policies, see the following:
■ See “About performing disaster recovery backups” on page 100.
■ See “Enabling NetBackup for PureDisk backups” on page 104.
Some of the clustered disaster recovery procedures use PureDisk 6.5.x installation
tools. The disaster recovery procedures assume some familiarity with these tools.
For more information about these tools, see the PureDisk 6.5.1 version of the
following manual:
PureDisk Storage Pool Installation Guide.

Recovering from a single-node failover


The Veritas Cluster Server (VCS) software ensures that the services on a failing
node migrate smoothly and automatically to one of the passive nodes. Failovers
of this kind occur, for example, when you resolve node-specific hardware problems,
perform general node maintenance, and so on. Alternately, an administrator can
manually transfer services from an active node to a passive node from the Cluster
Manager Java console.
The following procedure explains how to recover the failed node and return it to
service.

To recover a failed node


1 Use the following procedure to recover the failed node and return it to service
as a passive node:
See “Adding a new passive node to a cluster” on page 286.
2 (Conditional) Reinstall the NetBackup client software on all failed nodes.
Perform this step if a NetBackup client was installed on the failed nodes before
the disaster.
If you back up your storage pool to NetBackup or if you use the NetBackup
export engine, install the NetBackup client on the failed nodes now. For
information about how to install a NetBackup client, see your NetBackup
documentation.

Recovering one active node


If you have several nodes in a clustered storage pool, it is possible for only one,
two, or several active nodes to fail. In this scenario, at least one node in a cluster
has failed, but the storage pool has not experienced a complete disaster because
some active nodes are still running. The following procedure explains how to
recover a single active node. If more than one active node failed, repeat this
procedure for each active node that failed.
To recover one active node
1 Reinstall the PDOS software and the VCS software.
See “Reinstalling the PDOS software and the VCS software” on page 160.
2 Recreate the disks and volumes.
See “Recreating disks and volumes” on page 162.
3 Run the DR_Restore_all.sh script.
See “Running the DR_Restore_all script” on page 165.

Reinstalling the PDOS software and the VCS software


The following procedure explains how to reinstall PDOS and VCS on a failed node.

To reinstall PDOS and VCS on one active node


1 Locate this storage pool's cluster planning spreadsheet.
When the storage pool was installed initially, the person who performed the
installation should have completed the PureDisk cluster planning spreadsheet.
The spreadsheet is in a file named PureDisk_ClusterPlanning.xls. It can
be easier to restore a clustered storage pool if you have access to the
information that is on the spreadsheet.
If you do not have a completed cluster planning spreadsheet from this storage
pool's initial installation, you can get a blank one from the following Web
site:
http://www.symantec.com/business/support/documentation.jsp?pid=52672
Alternatively, see file
/opt/pdweb/htdocs/documentation/PureDisk_ClusterPlanning.xls for
a new copy of this file.
2 Use the Cluster Manager Java Console to offline all service groups from all
the nodes.
From the Cluster Manager Java Console, right-click the cluster group, and
select Offline > All Systems.
3 Remove all service groups from all the nodes.
Right-click each group and select delete to prevent VCS from taking any
action while you recover the PureDisk storage pool.
4 Reinstall PDOS on the failed nodes.
Use the PDOS installation information in the following manual:
PureDisk Storage Pool Installation Guide.
On the PDOS main menu, you can select either Install (to use the installation
wizard) or you can select Expert (to perform the installation manually).
On each failed node, perform additional configuration steps as needed for
your environment. For example, make sure to consult the chapter called
"Preparing to configure the storage pool" in the installation guide.
5 On each failed node, install the clustering software.
The following explains how to install VCS manually on PDOS 6.6:
See “About the Veritas Cluster Server (VCS) software installation” on page 341.
If you are familiar with the typical VCS installation procedure, be aware of
the following differences:

■ When the VCS installer prompts you to specify the nodes on which to
install the software, specify only the failed nodes. Do not install VCS on
the nodes that do not need to be recovered.
■ If you install VCS on only one node, the VCS installer issues a warning.
The warning asks you to confirm that you want to install only a single-node
cluster. Answer y.
■ At the end of the VCS 4.1 MP3 installation, the VCS installer asks you to
specify whether you are ready to configure VCS. Answer n.

6 For each node, type the following command to configure a service address
on the public NIC in the node:

# ip a a ip_address dev ethn

For ip_address, specify the service IP address of the service you want to
configure. For n, specify the number of the public network interface card
(NIC) on this node.
For example, on node1.acme.com, you could type the following command:

# ip a a 100.100.100.101 dev eth1

Note: Make sure to repeat this step on each active node, including the healthy
nodes. Because you removed the service groups for the entire storage pool,
you need to recreate the service addresses for each node at this time.

7 Proceed to the following:


See “Recreating disks and volumes” on page 162.

Recreating disks and volumes


The following procedure explains how to recreate the disks, recreate the disk
volumes, and mount the disk volumes.
To recreate the disks and volumes
1 Reinitialize the node's disk volumes on all nodes, even those that did not fail.
Perform the following steps to import all nodes' disk volumes on all storage
pool nodes:
■ Import the disk group that is associated with this node:

# vxdg import disk_group_name



For more information about this command, see the Veritas Storage
Foundation documentation.
■ On each active node that you want to restore, type the following command
to start that node's disk volumes:

# vxvol -g disk_group_name startall

For more information about this command, see the Veritas Storage
Foundation documentation.
■ Repeat the preceding steps on all nodes. Make sure to import and start
the disk volumes on all nodes in the storage pool, including those that did
not fail.
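For example, for a hypothetical disk group named dg1, the import and start commands on one node might look like the following:

# vxdg import dg1
# vxvol -g dg1 startall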

2 (Conditional) Type the following command to create the /Storage directory:


Perform this step on the failed node if the disks attached to the node crashed
or if the /Storage directory does not exist.

# mkdir /Storage

3 (Conditional) Issue the mount(8) command to mount /Storage on this node.


Perform this step if you created the /Storage directory in the previous step.
This command has the following format:

mount -t vxfs /dev/vx/dsk/disk_group_name/volume_name /Storage

For disk_group_name and volume_name, specify the full hardware path to
the /Storage volume that you created on this node.
For example:

# mount -t vxfs /dev/vx/dsk/dg1/disk1 /Storage

As you mount /Storage on each node, make sure that the mount attaches to
a different disk for each PureDisk node. Connect each node to a different
LUN.
4 (Conditional) Remove the existing topology files.
Perform this step if the node you want to recover is the storage pool authority
node.
Remove the following files:
■ topology.ini or topology.ini.enc

■ topology_nodes.ini

5 (Conditional) Create the /Storage/data directory.


Perform this step if a /Storage/data partition existed on this node before
the disaster and this directory no longer exists.
Type the following command:

# mkdir /Storage/data

6 (Conditional) Issue the mount(8) command to mount /Storage/data on this
node.
Perform this step if you created a /Storage/data directory.
This command has the following format:

mount -t vxfs /dev/vx/dsk/disk_group_name/volume_name /Storage/data

For disk_group_name and volume_name, specify the full hardware path to
the /Storage/data volume that you created on this node.
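For example, for a hypothetical disk group dg1 and volume vol_data, the command might look like the following:

# mount -t vxfs /dev/vx/dsk/dg1/vol_data /Storage/data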
7 (Conditional) Create the /Storage/databases directory.
Perform this step if a /Storage/databases partition existed on this node
before the disaster and this directory no longer exists.
Type the following command:

# mkdir /Storage/databases

8 (Conditional) Issue the mount(8) command to mount /Storage/databases on
this node.
Perform this step if you created a /Storage/databases directory.
This command has the following format:

mount -t vxfs /dev/vx/dsk/disk_group_name/volume_name /Storage/databases

For disk_group_name and volume_name, specify the full hardware path to
the /Storage/databases volume that you created on this node.
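For example, for a hypothetical disk group dg1 and volume vol_databases, the command might look like the following:

# mount -t vxfs /dev/vx/dsk/dg1/vol_databases /Storage/databases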

9 (Conditional) Specify correct directory permissions for /Storage/databases.


Perform this step if you created a /Storage/databases partition.
Type the following command:

# chown -R pddb.pddb /Storage/databases

10 Proceed to the following:


See “Running the DR_Restore_all script” on page 165.

Running the DR_Restore_all script


The following procedure explains how to complete the last disaster recovery
preparation steps and points you to the procedure that explains how to run the
DR_Restore_all.sh script.

To run the DR_Restore_all.sh script


1 (Conditional) Reinstall the NetBackup client software on all failed nodes.
Perform this step if a NetBackup client was installed on the failed nodes before
the disaster.
If you back up your storage pool to NetBackup or if you use the NetBackup
export engine, install the NetBackup client on the failed nodes now. For
information about how to install a NetBackup client, see your NetBackup
documentation.
2 (Conditional) Clean up after a failed full disaster recovery backup.
Perform the following procedure if a previous full disaster recovery backup
failed:
See “(Conditional) Cleaning up after a failed full disaster recovery backup”
on page 172.
3 (Conditional) Recreate your topology.
Perform this step if you enabled encryption during your disaster recovery
backups and if you backed up your storage pool to a Samba shared file system
or to a third-party product.
See “(Conditional) Recreate your topology information” on page 174.
4 Run the DR_Restore_all.sh script to recover the data.
See “Running the DR_Restore_all script to recover the data” on page 175.

Recovering from a data storage corruption


It is possible for the disks that host the storage partitions to become corrupted.
You need to perform a disk corruption recovery if the following conditions are
present:
■ The PureDisk software failed, or one or more of the databases appears to be
corrupted. To determine the state of the databases, examine the log file in the
following directory:

/Storage/log/pddb/postgresql.log

Messages such as the following in the log file are possible signs of a corrupted
database (a quick search command appears after this list):

FATAL: could not open file "/Storage/databases/pddb/data/global/1262":
No such file or directory

LOG: could not open temporary statistics file
"/Storage/databases/pddb/data/global/pgstat.tmp.4391": No such file or directory

■ /Storage failed or one or more disks have crashed.

■ The storage pool generates unexpected results after a failover.

■ A failover failed to complete.
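As a quick check for the corruption messages shown earlier, you can search the log file for those error patterns. The following is a minimal sketch; the patterns match the example messages only, and other messages are possible:

# grep -E 'FATAL|could not open' /Storage/log/pddb/postgresql.log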
The following procedure explains how to recover from a data storage corruption
disaster.
To recover from a data storage corruption
1 Recreate the storage partitions that failed.
See “Recreate the storage partitions that failed” on page 166.
2 Recreate the disks and volumes.
See “Recreating disks and volumes” on page 167.
3 Run the DR_Restore_all.sh script.
See “Running the DR_Restore_all script” on page 168.

Recreate the storage partitions that failed


The following procedure explains how to recreate the storage partitions that
failed.

To recreate the storage partitions that failed


1 Clean the cluster service groups.
Perform the following steps:
■ Log on to the Cluster Manager Java console.
■ Verify the state of the service groups.
After a disaster, some service groups can be in the faulted state on one or
more nodes.
■ For each service group that has faulted, right-click the group and select
clear fault - auto to clear the faulted state of the service group.
■ Place all service groups offline.
Include all service groups, even those service groups on the nodes that
the disaster did not affect. Right-click each group and select offline - Any
system to shut down PureDisk on all nodes.
■ Remove all service groups, even those that reside on the nodes that the
disaster did not affect.
Right-click each group and select delete to prevent VCS from taking any
action while you recover the PureDisk storage pool.

2 (Conditional) Replace the failed storage mounts.


Perform this step if necessary.
For example, replace the disk hardware that failed.
3 For each failed node, create new storage partitions to replace the crashed
disks.
Perform the following procedure if you had to replace the disk hardware:
See “(Conditional) Using YaST to create the storage partitions” on page 365.
4 Proceed to the following:
See “Recreating disks and volumes” on page 167.

Recreating disks and volumes


The following procedure explains how to recreate the disks, recreate the disk
volumes, and mount the disk volumes.

To recreate disks and volumes


1 Perform the following procedure:
See “Recreating disks and volumes” on page 162.
2 Proceed to the following:
See “Running the DR_Restore_all script” on page 168.

Running the DR_Restore_all script


The following procedure explains how to complete the last disaster recovery
preparation steps and points you to the procedure that explains how to run the
DR_Restore_all.sh script.

To run the DR_Restore_all script


1 (Conditional) Extract the storage pool software.
Perform this step if you replaced disk hardware that was attached to the node
that hosted the storage pool authority service.
For example, if you mount the PureDisk software DVD to /cdrom, type the
following command:

# /cdrom/puredisk/install.sh --force
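If the DVD is not already mounted at /cdrom, you might first mount it before you run the preceding command. The following is a minimal sketch that assumes the optical drive is available as /dev/cdrom; the device name can differ on your node:

# mkdir -p /cdrom
# mount -o ro /dev/cdrom /cdrom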

2 For each node, type the following command to configure a service address
on the public NIC in the node:

# ip a a ip_address dev ethn

For ip_address, specify the service IP address of the service you want to
configure. For n, specify the number of the public network interface card
(NIC) on this node.
For example, on node1.acme.com, you could type the following command:

# ip a a 100.100.100.101 dev eth1

Note: Make sure to repeat this step on each active node, including the healthy
nodes. Because you removed the service groups for the entire storage pool,
you need to recreate the service addresses for each node at this time.

3 (Conditional) Clean up after a failed full disaster recovery backup.


Perform the following procedure if a previous full disaster recovery backup
failed:
See “(Conditional) Cleaning up after a failed full disaster recovery backup”
on page 172.
4 (Conditional) Recreate your topology.
Perform this step if you enabled encryption during your disaster recovery
backups and if you backed up your storage pool to a Samba shared file system
or to a third-party product.
See “(Conditional) Recreate your topology information” on page 174.
5 Run the DR_Restore_all.sh script to recover the data.
See “Running the DR_Restore_all script to recover the data” on page 175.

Recovering from a complete storage pool disaster (clustered, complete storage pool disaster)
If your PureDisk nodes and shared storage have failed, you need to recover the
entire storage pool. The following procedure explains how to recover all the nodes
in a clustered storage pool.
To recover all the nodes in a clustered storage pool
1 Reinstall PDOS and disable the service groups.
See “Reinstalling PDOS and disabling the service groups” on page 169.
2 Recreate the disks and volumes.
See “Recreating disks and volumes” on page 170.
3 Run the DR_Restore_all.sh script.
See “Running the DR_Restore_all script” on page 170.

Reinstalling PDOS and disabling the service groups


The following procedure explains how to reinstall PDOS, offline the service groups,
and remove all the service groups.

To reinstall PDOS and disable the service groups


1 Reinstall PDOS on all nodes and run the storage pool configuration wizard
to configure your storage pool.
Reinstall and reconfigure your environment as if this were a new installation.
Make sure to configure your storage partitions, nodes, and services as they
were before the disaster. If you completed the cluster planning spreadsheet
when you performed the initial installation, use it now to help you recreate
your topology.
For information about how to install PDOS and configure the PureDisk
application, see the PureDisk Storage Pool Installation Guide.
2 Use the Cluster Manager Java Console to offline all service groups from all
the nodes.
From the Cluster Manager Java Console, right-click the cluster group, and
select Offline > All Systems.
3 Remove all service groups.
Right-click each group and select delete to prevent VCS from taking any
action while you recover the PureDisk storage pool.
4 Proceed to the following:
See “Recreating disks and volumes” on page 170.

Recreating disks and volumes


The following procedure explains how to recreate the disks, recreate the disk
volumes, and mount the disk volumes.
To recreate disks and volumes
1 Perform the following procedure:
See “Recreating disks and volumes” on page 162.
2 Proceed to the following:
See “Running the DR_Restore_all script” on page 170.

Running the DR_Restore_all script


The following procedure explains how to complete the last disaster recovery
preparation steps and points you to the procedure that explains how to run the
DR_Restore_all.sh script.

To run the DR_Restore_all script


1 Remove the following files from the storage pool authority node:
■ /Storage/etc/topology.ini

■ /Storage/etc/topology_nodes.ini

2 For each node, type the following command to configure a service address
on the public NIC in the node:

# ip a a ip_address dev ethn

For ip_address, specify the service IP address of the service you want to
configure. For n, specify the number of the public network interface card
(NIC) on this node.
For example, on node1.acme.com, you could type the following command:

# ip a a 100.100.100.101 dev eth1

Note: Make sure to repeat this step on each active node, including the healthy
nodes. Because you removed the service groups for the entire storage pool,
you need to recreate the service addresses for each node at this time.

3 (Conditional) Reinstall the NetBackup client software on all failed nodes.


Perform this step if a NetBackup client was installed on the failed nodes before
the disaster.
If you back up your storage pool to NetBackup or if you use the NetBackup
export engine, install the NetBackup client on the failed nodes now. For
information about how to install a NetBackup client, see your NetBackup
documentation.
4 (Conditional) Clean up after a failed full disaster recovery backup.
Perform the following procedure if a previous full disaster recovery backup
failed:
See “(Conditional) Cleaning up after a failed full disaster recovery backup”
on page 172.

5 (Conditional) Recreate your topology.


Perform this step if you enabled encryption during your disaster recovery
backups and if you backed up your storage pool to a Samba shared file system
or to a third-party product.
See “(Conditional) Recreate your topology information” on page 174.
6 Run the DR_Restore_all.sh script to recover the data.
See “Running the DR_Restore_all script to recover the data” on page 175.

(Conditional) Cleaning up after a failed full disaster recovery backup
Perform one of the following procedures only if a previous full disaster recovery
backup failed:
■ See “Cleaning up after a failed full NetBackup disaster recovery backup
(clustered, complete storage pool disaster)” on page 172.
■ See “Cleaning up after a failed full Samba disaster recovery backup (clustered,
complete storage pool disaster)” on page 173.
■ See “Cleaning up after a failed full third-party product disaster recovery backup
(clustered, complete storage pool disaster)” on page 173.

Cleaning up after a failed full NetBackup disaster recovery backup (clustered, complete storage pool disaster)
To clean up after a failed full NetBackup disaster recovery backup, you must expire
(or purge) corrupted backup images from previous failed backups.
To expire the backup images from failed backups
◆ From the NetBackup interface, manually expire any NetBackup images that
are newer than the date of the last successful backup.
Symantec highly recommends that you purge any corrupted backup images
from any unsuccessful backups before you try the restore.
See the NetBackup Administrator’s Guide, Volume I for information on how
to search the catalog for images.
You can search the catalog and use the PureDisk server as the client name.
Look for the images that have the Standard and DataStore disaster recovery
policies.
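For example, if you prefer the command line, the NetBackup bpexpdate command on the NetBackup master server can expire an image by backup ID; the following is a minimal sketch, and the backup ID shown is hypothetical:

# /usr/openv/netbackup/bin/admincmd/bpexpdate -backupid pdspa.example.com_1254300000 -d 0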

Cleaning up after a failed full Samba disaster recovery backup (clustered, complete storage pool disaster)
To clean up after a failed full Samba disaster recovery backup, you must remove
any remaining corrupted files from the incomplete backup.
The PureDisk disaster recovery full backup scripts create a previous directory
at the root level of the share that contains the previous backup data. PureDisk
creates this directory at the start of the full backup and removes it at the end of
a successful backup.
If the backup fails, the previous directory still exists on the share. If this directory
exists at the start of another backup run, PureDisk preserves the contents. It
deletes the current backup data from the failed backup before it starts the new
backup. PureDisk removes the previous directory only when a new full backup
is successful.
To remove corrupted files from an incomplete backup
1 Search for a directory that is called previous in the root level of the share
that contains the previous backup data.
2 Move the contents of the previous directory to the root level of the share.
3 Remove the directory called previous.

Cleaning up after a failed full third-party product disaster recovery backup (clustered, complete storage pool disaster)
To clean up after a failed full third-party product disaster recovery backup, you
must remove any remaining corrupted files from the incomplete backup.
Symantec recommends that you follow some procedure at your site to ensure that
the complete contents of a disaster recovery backup are preserved. The
preservation ensures consistent backup data.
For example, assume that you perform a full disaster recovery backup each
Monday. On the first Monday of the month, the backup runs without problems.
On the second Monday of the month, the disaster recovery backup removes the
contents of the first Monday’s backup. However, the second full disaster recovery
backup fails. This failure leaves you without any consistent backup data.
To remove corrupted files from an incomplete backup
1 Examine your backup repository.
2 Locate the last successful backup.
3 Remove any subsequent failed backups from the system.

(Conditional) Recreate your topology information


Perform one of the procedures in this section if you enabled encryption for your
disaster recovery backups.
You need your storage pool's topology information in order to perform the restore.
Perform one of the following procedures:
■ See “(Conditional) Recreating the topology with current topology information
(Samba or third-party, clustered recovery)” on page 174.
■ See “(Conditional) Recreating the topology without current topology
information (Samba or third-party, clustered recovery)” on page 174.

(Conditional) Recreating the topology with current topology information (Samba or third-party, clustered recovery)
Perform the following procedure if all of the following conditions are true:
■ You enabled encryption in the PureDisk disaster recovery backup policy.
■ You have a backup copy of this storage pool’s topology and you can recreate
it.
To recreate the topology when you have the storage pool’s topology information
◆ Enter the following command and follow the prompts to recreate this storage
pool’s topology:

# /opt/pdinstall/edit_topology.sh

Your goal is to recreate the PureDisk topology so that it matches the topology
that existed before the disaster.
For information about the storage pool’s topology and node identification
information, see the worksheets that you completed during this storage pool’s
installation.

(Conditional) Recreating the topology without current topology information (Samba or third-party, clustered recovery)
Perform the following procedure if all of the following conditions are true:
■ You enabled encryption in the PureDisk disaster recovery backup policy.
■ You do not have a backup copy of this storage pool’s topology.

To recreate the topology without current topology information


1 Enter the following command and follow the prompts to recreate this storage
pool’s topology:

# /opt/pdinstall/edit_topology.sh

Include only the storage pool authority in the topology.

Note: Make sure you enter the correct storage pool ID. Make sure that all
passwords you use during the disaster recovery process are the same as those
that existed before the disaster occurred.

2 Enter the following command to install the new storage pool:

# /opt/pdinstall/install_newStoragePool.sh

3 Remove the following files:

/Storage/etc/topology.ini
/Storage/etc/topology_nodes.ini

The preceding files are not needed at this time. The disaster recovery script
restores these files from the backup.
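For example, you can remove both files with a single command:

# rm -f /Storage/etc/topology.ini /Storage/etc/topology_nodes.ini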

Running the DR_Restore_all_script to recover the data


After you reinstall the PureDisk software that was destroyed in the disaster, your
next task is to run the DR_Restore_all.sh script. The DR_Restore_all.sh script
performs the following actions:
■ Prompts for all of the information that is required to restore an entire storage
pool.
■ Restores the data that was backed up.
■ Optimizes the content router restore. This action occurs when the disaster
affected only a subset of the content routers.
The DR_Restore_all.sh script fully restores the /Storage/data directory of
all failed content router nodes.
If the configuration includes any content routers that do not need to be fully
recovered because no disaster occurred on them, the script performs minimal restores.
The restores bring the content routers to a state that is consistent with the
point in time of the last disaster recovery backup. Since the last backup was
done, some data segments might have been removed or added.

In these cases, the script does the following:


■ Restores all databases and configuration files.
■ Restores the segment containers so that they are consistent with the content
router databases.
■ Restores all segment containers that a removal job has changed or deleted
since the last backup.
■ Creates PureDisk service groups in VCS.

The DR_Restore_all.sh script prompts you to specify different information
depending on the method you used to perform the disaster recovery backup.
Proceed to one of the following, depending on your disaster recovery backup
method:
■ See “Recovering a PureDisk clustered storage pool from a NetBackup disaster
recovery backup” on page 176.
■ See “Recovering a PureDisk clustered storage pool from a Samba disaster
recovery backup” on page 183.
■ See “Recovering a PureDisk clustered storage pool from a third-party product
disaster recovery backup” on page 191.

Recovering a PureDisk clustered storage pool from a NetBackup disaster recovery backup
The following procedure explains how to recover a PureDisk clustered storage
pool from a NetBackup disaster recovery backup.
To recover a PureDisk clustered storage pool from a NetBackup disaster recovery
backup
1 Run the DR_Restore_all script - phase 1.
See “Running the DR_Restore_all script - phase 1 (NetBackup, clustered
recovery)” on page 177.
2 Run the DR_Restore_all script - phase 2.
See “Running the DR_Restore_all script - phase 2 (NetBackup, clustered
recovery)” on page 180.
3 Finish the restore.
See “Finishing the restore (NetBackup, clustered recovery)” on page 182.

Running the DR_Restore_all script - phase 1 (NetBackup, clustered recovery)
During a disaster recovery, you need to run the DR_Restore_all.sh script in two
phases. In this first phase, verify host and service address mappings. If the host
address and service address mappings do not match, the script exits and you need
to correct the mappings. If necessary, rerun the script until the host and service
address mappings match. Later, in phase 2, continue to answer the script’s prompts
regarding the storage pool.
To verify the host and service address mappings
1 Log on to the storage pool authority node as root.
2 (Conditional) Make sure that the root_hash file exists on this node.
Perform this step if the storage pool uses an external root broker. For more
information about external root brokers, see the PureDisk Getting Started
Guide and the PureDisk Storage Pool Installation Guide.
3 Make sure that all storage partitions are mounted.
For example, use the following mount(8) command to verify the mounts:

# mount | grep Storage

4 Type the following command to run the disaster recovery script:

# /opt/pdinstall/DR_Restore_all.sh

5 Specify the method you used to back up PureDisk.


For example:

Please choose the method you used to do the Disaster Recovery Backup
1. NetBackup
2. Samba Share
3. Local Directory
Backup Method (1|2|3): 1

6 Affirm whether you installed the NetBackup client on the node.


If you answer n, the script stops, and you must restart the script after you
install the NetBackup client software.
For example:

Is the NetBackup client installed on all nodes and pointing to
the NBU Server that was used to do the backups? [Yn]: y

7 Provide the full path of any upgrade patch files that need to be applied.

Please provide the location of the upgrade patch tar file.
For multiple patches, enter in the order they should be applied.
(leave blank for none) :

For example:
/root/NB_PDE_6.6.1.17630.tar

For multiple upgrade patches that need to be applied, provide the latest patch that
can be installed on top of the base version. Otherwise, provide the patches in the
order in which they should be applied (applicable to EEBs).
Leave blank and press Enter if there are no patches to apply.
Next, respond to the following prompt:

Are there any more patches that need to be applied? (Yn) :

Answer yes (y) to apply additional patches. Answer no (n) to continue with
the disaster recovery process.
8 Respond to the prompts regarding the topology file.
Directory /Storage/etc must contain the following files:
■ topology.ini or topology.ini.enc

■ topology_nodes.ini

If these files are not present, the script issues the following prompt:

Please provide the virtual Fully Qualified Domain Name of your SPA:

Enter the service fully qualified domain name (FQDN) for the storage pool
authority (SPA) node. The script retrieves the files from the location you
provide.
If topology.ini.enc is present, the script issues the following prompt for
the password:

topology.ini file needs to be decrypted before proceeding


enter aes-256-cbc decryption password:

Type the password that you use for the storage pool configuration wizard.
9 Observe the messages that the script produces and take one of the following
actions:
■ If the host and service mappings are synchronized properly with the
topology on the storage pool, the script continues. Proceed to the following
section:
See “Running the DR_Restore_all script - phase 2 (NetBackup, clustered
recovery)” on page 180.
■ If the host and service mappings are not synchronized with the topology
on the storage pool, the script issues the following message and stops:

WARNING: You are running in a VCS environment. This means the topology_nodes.ini file
that has just been restored may be out of date. VCS failover events could have changed
the physical - service address mapping for nodes between the time the DR backup last ran
and now.
To verify these mappings, please run /opt/pdinstall/edit_topology.sh and select option
"Edit a node" to edit all PureDisk nodes and spare nodes in your topology. Verify that
for PureDisk nodes, the service address in the "Virtual IP/Hostname" entry is on the
same node as the physical address in the "IP/Hostname" entry. If not, update the
"IP/Hostname" entry to contain the correct physical address for the service address.
Verify that for spare nodes, the "IP/Hostname" entry is really the physical address of
a node that is currently acting as a spare node.
Also, select the option "Configure root broker" and verify that the root broker mapping
is correct.

Once you verified the physical - service address mapping is correct for all nodes, and
the root broker mapping, please run this script again.

Note: As the preceding warning message explains, at this time, it is
important to run /opt/pdinstall/edit_topology.sh to start the topology
editor. In the topology editor, for each node, select Edit a node, and click
OK. This action creates the required VCS configuration file on each node.

For example, failovers might have occurred between the time of the last
disaster recovery backup and this restore. If so, the restore topology files
have invalid host and service address mappings for the nodes of the storage
pool.

Correct these inconsistencies and run the DR_Restore_all script again.

10 (Conditional) Verify the NIC identifiers and (conditionally) correct the NIC
identifiers.
Perform this step if you reinstalled PDOS on any nodes.
When you reinstall PDOS, the NIC identifiers can be different from the NIC
identifiers that existed in the previous PDOS installation.
Perform the following steps to verify and, if necessary, correct the NIC
identifiers:
■ Log into the storage pool authority node and type the following command
to start the topology editor:

# /opt/pdinstall/edit_topology.sh

Use the topology editor to check and, if necessary, correct the NIC
identifier for the public NIC. The topology editor displays information
about the public NIC below the service addresses. You can change
information about the public NIC in the topology editor.
■ Open file /Storage/etc/topology_nodes.ini.
■ Search for the following keywords: firstprivate and secondprivate.
■ Verify that the ethn identifiers for firstprivate and secondprivate
point to the correct NICs.
■ (Conditional) Correct the ethn identifiers in the topology_nodes.ini file.
Perform this step if the ethn identifiers differ from those that existed
when PDOS was installed initially.
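For example, the following sketch lists the identifiers that the topology file records and the interfaces that the node itself reports, so that you can compare the two (the grep pattern assumes the keyword spelling shown above):

# grep -E 'firstprivate|secondprivate' /Storage/etc/topology_nodes.ini
# ip link show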

Running the DR_Restore_all script - phase 2 (NetBackup, clustered recovery)
In phase 2, the script restores the files from the backup. The DR_Restore_all.sh
script initiates a dialog with you. You need to answer the questions that the script
displays.
To restore a storage pool
1 Examine the topology information the script displays and specify the nodes
you want to restore.
The script reads the topology file and presents a display like the following
example:

STORAGE POOL TOPOLOGY


Node IP Address Services
---- ---------- ---------
1 10.80.62.1 spa mbe mbs cr nbu
2 10.80.62.2 cr
Node number(s): 1

The preceding example shows the topology that the script can restore in this
disaster recovery operation. Examine this information for accuracy, and
specify all nodes for restore. Use commas to separate the node numbers.
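For example, to select both nodes in the preceding display, you would type both node numbers at the prompt:

Node number(s): 1,2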
After the reinstall is complete, the script performs the following actions:
■ Remounts the mount points.
■ Restores all the data on the failed nodes.
■ Restores all the databases on all the nodes.
■ Restores any removed data. A data removal job might have been run since
the last time the databases were backed up. For this reason, the script also
restores the removed data on the nodes that did not fail. This action
synchronizes the databases and data.

2 Respond to queries from the script regarding passwords.


The restore process can take hours to complete. As the script runs, you might
be asked to specify the system passwords for the remote (or local) nodes. This
prompt occurs early in the process during secure shell (SSH) authentication.
3 (Conditional) Respond to the queries from the upgrade patches.
Refer to the README files that came with the upgrade patches for specific
details about the queries.
Respond no to creating jobs for upgrading agents.
At the end of the upgrade installation, the script prompts you to encrypt the
topology.ini file. Answer no at this time to continue. You will have a chance
to encrypt the topology.ini file at the end of the disaster recovery process.

4 When the restore is complete, answer the prompts about encryption of the
topology.ini file.

For example:

Would you like to encrypt the topology.ini file? [Yn]:y


Encrypting /opt/pdinstall/topology.ini
enter aes-256-cbc encryption password: xxxx
Verifying - enter aes-256-cbc encryption password:

Type the password that you use for the storage pool configuration wizard.
5 Observe the completion message.
When the operation completes successfully, the script displays the following
message:

Disaster recovery complete

Finishing the restore (NetBackup, clustered recovery)


The following procedure explains the tasks you need to perform to finish the
restore.
To finish the restore
1 On each content router node, log on as root.
2 Run the following command to change DEREF to Yes.

# /opt/pdcr/bin/crcontrol -m DEREF=Yes

3 Repeat step 1 and step 2 until you have run this command on all content
router nodes.
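If you manage several content routers, a small loop such as the following sketch can run the command on each node over SSH. The host names are placeholders for your own content router nodes:

for node in cr-node1 cr-node2; do
    # run crcontrol on each content router node in turn
    ssh root@$node /opt/pdcr/bin/crcontrol -m DEREF=Yes
done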

4 (Conditional) Run the following script on the storage pool authority node to
upgrade the security protocol:

# /opt/pdinstall/disable_sslv2.sh

Perform this step if you ran the disable_sslv2.sh script on this storage pool
at any time. The disaster recovery restore does not enable this script
automatically.
Symantec recommends that you run the script unless PureDisk 6.5.x storage
pools need to replicate to this storage pool.
5 (Conditional) Reenable the NetBackup export engine on any nodes that hosted
only a NetBackup export engine service.
Perform this step only if you have a node that hosted only a NetBackup export
engine service.
For information about how to enable a NetBackup export engine, see the
following:
See “About exporting data to NetBackup” on page 73.
6 Perform a full disaster recovery backup.
Make sure you perform a full disaster recovery backup before you perform
any file backups or perform any incremental disaster recovery backups.

Recovering a PureDisk clustered storage pool from a Samba disaster recovery backup
The following procedure explains how to recover a PureDisk clustered storage
pool from a disaster recovery backup that you wrote to Samba.
To recover a PureDisk clustered storage pool from a Samba disaster recovery backup
1 Run the DR_Restore_all script - phase 1.
See “Running the DR_Restore_all script - phase 1 (Samba, clustered recovery)”
on page 184.
2 Run the DR_Restore_all script - phase 2.
See “Running the DR_Restore_all script - phase 2 (Samba, clustered recovery)”
on page 188.
3 Finish the restore.
See “Finishing the restore (Samba, clustered recovery)” on page 190.

Running the DR_Restore_all script - phase 1 (Samba, clustered recovery)
During a disaster recovery, you need to run the DR_Restore_all.sh script in two
phases. In this first phase, verify host and service address mappings. If the host
address and service address mappings do not match, the script exits and you need
to correct the mappings. If necessary, rerun the script until the host and service
address mappings match. Later, in phase 2, continue to answer the script’s prompts
regarding the storage pool.
To verify the host and service address mappings
1 Log on to the storage pool authority node.
2 (Conditional) Make sure that the root_hash file exists on this node.
Perform this step if the storage pool uses an external root broker. For more
information about external root brokers, see the PureDisk Getting Started
Guide and the PureDisk Storage Pool Installation Guide.
3 Make sure that all storage partitions are mounted.
For example, use the following mount(8) command to verify the mounts:

# mount | grep Storage

4 Type the following command to run the disaster recovery script:

# /opt/pdinstall/DR_Restore_all.sh

5 Specify the method you used to back up PureDisk.


For example:

Please choose the method you used to do the Disaster Recovery Backup
1. NetBackup
2. Samba Share
3. Local Directory
Backup Method (1|2|3): 2

6 Provide the information that PureDisk needs to mount the shared file system.
For example:

Please enter remote samba share (i.e. //11.88.77.33/remoteSambaShare):
//rmns1.min.boston.com/PD_DRdata

7 Provide information about the local mount point.


The local mount point is the path name of a directory on which to mount the
share. If you used the DR_restore_all script in the default PureDisk location,
the mount point must be /DRdata.

Please enter local mount point (default: /DRdata):

If appropriate, press Enter to accept the default.


8 Provide authentication information.
This information includes the user name, password, and work group for
Samba on the remote file server.
For example:

Please enter samba user name: pduser


Please enter samba password: pdpwd
Please enter samba workgroup: pdwgroup
Please enter location to restore CR data and spool area from
(default: /DRdata):
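Before you start the restore, you can optionally confirm that the node can reach the share with these credentials. The following sketch uses the smbclient utility, if it is available on the node, with the example share, user name, and workgroup from this procedure:

# smbclient //rmns1.min.boston.com/PD_DRdata -U pduser -W pdwgroup -c 'ls'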

9 Type the full system path name to the disaster recovery script used to save
your PureDisk data.
If you used the DR_restore_all.sh script in the default PureDisk location,
press Enter.
If you supplied your own restore script, PureDisk does not protect it. The
scripts are overwritten during a restore procedure if they remain in the default
installation directory (/opt). You must write them to another directory for
protection, such as /usr or /tmp.
For example:

Please enter full path of customized DR restore script (default:
/opt/pdconfigure/scripts/support/DR_BackupSampleScripts/DRrestore.sh):

10 Respond to the prompts regarding encryption.


If you used a pdkeyutil password to enable encryption of your disaster
recovery data during backup, supply the password that you specified.
If you did not use pdkeyutil, type no; the script does not ask you for a
password.
For example:

Was encryption used to do the Disaster Recovery Backup [Yn]: y


Please provide the pdkeyutil pass phrase: *******

11 Provide the full path of any upgrade patch files that need to be applied.
Please provide the location of the upgrade patch tar file.
For multiple patches, enter in the order they should be applied.
(leave blank for none) :

For example:
/root/NB_PDE_6.6.1.17630.tar

For multiple upgrade patches that need to be applied, provide the latest patch that
can be installed on top of the base version. Otherwise, provide the patches in the
order in which they should be applied (applicable to EEBs).
Leave blank and press Enter if there are no patches to apply.
Next, respond to the following prompt:

Are there any more patches that need to be applied? (Yn) :

Answer yes (y) to apply additional patches. Answer no (n) to continue with
the disaster recovery process.
12 Type the storage pool ID for the storage pool that you want to restore.
This ID is the value specified for the storagepoolid property in the
topology.ini file. This value is used to retrieve the topology file.

For example:

Please enter Storage Pool ID:
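If you no longer remember the ID but still have a saved, unencrypted copy of the topology.ini file (for example, from your installation records), the following sketch shows one way to look it up. The path to the saved copy is a placeholder:

# grep -i storagepoolid /path/to/saved/topology.ini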

13 Respond to the prompts regarding the topology file.


Directory /Storage/etc must contain the following files:
■ topology.ini or topology.ini.enc

■ topology_nodes.ini

If these files are not present, the script retrieves them. If topology.ini.enc
is present, the script issues the following prompt for the password:

topology.ini file needs to be decrypted before proceeding


enter aes-256-cbc decryption password:

Type the password that you use for the storage pool configuration wizard.
14 Observe the messages that the script produces and take one of the following
actions:
■ If the host and service mappings are synchronized properly with the
topology on the storage pool, the script continues. Proceed to the following
section:
See “Running the DR_Restore_all script - phase 2 (Samba, clustered
recovery)” on page 188.
■ If the host and service mappings are not synchronized with the topology
on the storage pool, the script issues the following message and stops:

WARNING: You are running in a VCS environment. This means the topology_nodes.ini file
that has just been restored may be out of date. VCS failover events could have changed
the physical - service address mapping for nodes between the time the DR backup last ran
and now.
To verify these mappings, please run /opt/pdinstall/edit_topology.sh and select option
"Edit a node" to edit all PureDisk nodes and spare nodes in your topology. Verify that
for PureDisk nodes, the service address in the "Virtual IP/Hostname" entry is on the
same node as the physical address in the "IP/Hostname" entry. If not, update the
"IP/Hostname" entry to contain the correct physical address for the service address.
Verify that for spare nodes, the "IP/Hostname" entry is really the physical address of
a node that is currently acting as a spare node.
Also, select the option "Configure root broker" and verify that the root broker mapping
is correct.

Once you verified the physical - service address mapping is correct for all nodes, and
the root broker mapping, please run this script again.

Note: As the preceding warning message explains, at this time, it is
important to run /opt/pdinstall/edit_topology.sh to start the topology
editor. In the topology editor, for each node, select Edit a node, and click
OK. This action creates the required VCS configuration file on each node.

For example, failovers might have occurred between the time of the last
disaster recovery backup and this restore. If so, the restore topology files
have invalid host address and service address mappings for the nodes of
the storage pool.
Correct these inconsistencies and run the DR_Restore_all script again.

15 (Conditional) Verify the NIC identifiers and (conditionally) correct the NIC
identifiers.
Perform this step if you reinstalled PDOS on any nodes.
When you reinstall PDOS, the NIC identifiers can be different from the NIC
identifiers that existed in the previous PDOS installation.
Perform the following steps to verify and, if necessary, correct the NIC
identifiers:
■ Log into the storage pool authority node and type the following command
to start the topology editor:

# /opt/pdinstall/edit_topology.sh

Use the topology editor to check and, if necessary, correct the NIC
identifier for the public NIC. The topology editor displays information
about the public NIC below the service addresses. You can change
information about the public NIC in the topology editor.
■ Open file /Storage/etc/topology_nodes.ini.
■ Search for the following keywords: firstprivate and secondprivate.
■ Verify that the ethn identifiers for firstprivate and secondprivate
point to the correct NICs.
■ (Conditional) Correct the ethn identifiers in the topology_nodes.ini file.
Perform this step if the ethn identifiers differ from those that existed
when PDOS was installed initially.

Running the DR_Restore_all script - phase 2 (Samba, clustered recovery)
In phase 2, the script restores the files from the backup. The DR_Restore_all.sh
script initiates a dialog with you. You need to answer the questions that the script
displays.
To restore a storage pool
1 Examine the topology information the script displays and specify the nodes
you want to restore.
The script reads the topology file and presents a display like the following
example:

STORAGE POOL TOPOLOGY


Node IP Address Services
---- ---------- ---------
1 10.80.62.1 spa mbe mbs cr nbu
2 10.80.62.2 cr
Node number(s): 1

The preceding example shows the topology that the script can restore in this
disaster recovery operation. Examine this information for accuracy, and
specify all nodes for restore. Use commas to separate the node numbers.
After the reinstall is complete, the script performs the following actions:
■ Remounts the mount points.
■ Restores all the data on the failed nodes.
■ Restores all the databases on all the nodes.
■ Restores the removed data. A data removal job might have been run since
the last time the databases were backed up. For this reason, the script also
restores the removed data on the nodes that did not fail. This method
synchronizes the databases and data.

2 Respond to queries from the script regarding passwords.


The restore process can take hours to complete. As the process runs, you
might be asked to specify the system passwords for the remote (or local)
nodes. This prompt occurs early in the process during SSH authentication.
3 (Conditional) Respond to the queries from the upgrade patches.
Refer to the README files that came with the upgrade patches for specific
details about the queries.
Respond no to creating jobs for upgrading agents.
At the end of the upgrade installation, the script prompts you to encrypt the
topology.ini file. Answer no at this time to continue. You will have a chance
to encrypt the topology.ini file at the end of the disaster recovery process.

4 When the restore is complete, answer the prompts about encryption of the
topology.ini file.

For example:

Would you like to encrypt the topology.ini file? [Yn]:y


Encrypting /opt/pdinstall/topology.ini
enter aes-256-cbc encryption password: xxxx
Verifying - enter aes-256-cbc encryption password:

Type the password that you use for the storage pool configuration wizard.
5 Observe the completion message.
When the operation completes successfully, the script displays the following
message:

Disaster recovery complete

Finishing the restore (Samba, clustered recovery)


The following procedure explains the tasks you need to perform to finish the
restore.
To finish the restore
1 On each content router node, log on as root.
2 Run the following command to change DEREF to Yes.

# /opt/pdcr/bin/crcontrol -m DEREF=Yes

3 Repeat step 1 and step 2 until you have run this command on all content
router nodes.

4 (Conditional) Run the following script on the storage pool authority node to
upgrade the security protocol:

# /opt/pdinstall/disable_sslv2.sh

Perform this step if you ran the disable_sslv2.sh script on this storage pool
at any time. The disaster recovery restore does not enable this script
automatically.
Symantec recommends that you run the script unless PureDisk 6.5.x storage
pools need to replicate to this storage pool.
5 (Conditional) Reenable the NetBackup export engine on any nodes that hosted
only a NetBackup export engine service.
Perform this step only if you have a node that hosted only a NetBackup export
engine service.
For information about how to enable a NetBackup export engine, see the
following:
See “About exporting data to NetBackup” on page 73.
6 Perform a full disaster recovery backup.
Make sure you perform a full disaster recovery backup before you perform
any file backups or perform any incremental disaster recovery backups.

Recovering a PureDisk clustered storage pool from a third-party product disaster recovery backup
The following procedure explains how to recover a PureDisk clustered storage
pool from a disaster recovery backup that you wrote to a third-party product.
To recover a PureDisk clustered storage pool from a third-party product disaster
recovery backup
1 Run the DR_Restore_all script - phase 1.
See “Running the DR_Restore_all script - phase 1 (third-party, clustered
recovery)” on page 192.
2 Run the DR_Restore_all script - phase 2.
See “Running the DR_Restore_all script - phase 2 (third-party, clustered
recovery)” on page 196.
3 Finish the restore.
See “Finishing the restore (third-party, clustered recovery)” on page 198.

Running the DR_Restore_all script - phase 1 (third-party, clustered recovery)
During a disaster recovery, you need to run the DR_Restore_all.sh script in two
phases. In this first phase, verify host and service address mappings. If the host
address and service address mappings do not match, the script exits and you need
to correct the mappings. If necessary, rerun the script until the host address and
service address mappings match. Later, in phase 2, continue to answer the script’s
prompts regarding the storage pool.
To verify the host and service address mappings
1 Log on to the storage pool authority node.
2 (Conditional) Make sure that the root_hash file exists on this node.
Perform this step if the storage pool uses an external root broker. For more
information about external root brokers, see the PureDisk Getting Started
Guide and the PureDisk Storage Pool Installation Guide.
3 Make sure that all storage partitions are mounted.
For example, use the following mount(8) command to verify the mounts:

# mount | grep Storage

4 Type the following command to run the disaster recovery script:

# /opt/pdinstall/DR_Restore_all.sh

5 Specify the method you used to back up PureDisk.


For example:

Please choose the method you used to do the Disaster Recovery Backup
1. NetBackup
2. Samba Share
3. Local Directory
Backup Method (1|2|3): 3

6 Provide information about where the data to be restored is located.


Press Enter to accept the defaults for the following prompts:

Please enter location to restore metadata from (default: /DRdata):
Please enter location to restore CR data and spool area from
(default: /DRdata):

7 Type the full system path name to the disaster recovery script used to save
your PureDisk data.
If you used the DR_Restore_all.sh script in the default PureDisk location,
press Enter.
If you supplied your own restore script, remember that the scripts are not
protected. The scripts are overwritten during a restore procedure if they
remain in the default installation directory (/opt). To prevent this problem,
you must place them in another directory for protection (for example, in /usr
or /tmp).
For example:

Please enter full path of customized DR restore script (default:
/opt/pdconfigure/scripts/support/DR_BackupSampleScripts/DRrestore.sh):
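For example, to keep a customized copy of the sample restore script outside of /opt so that the restore cannot overwrite it (the destination directory is only an illustration):

# mkdir -p /usr/local/dr-scripts
# cp /opt/pdconfigure/scripts/support/DR_BackupSampleScripts/DRrestore.sh /usr/local/dr-scripts/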

8 Respond to the prompts regarding encryption.


If you used a pdkeyutil password to enable encryption of your disaster
recovery data during backup, supply the password that you specified.
If you did not use pdkeyutil, type no; the script does not ask you for a
password.

Was encryption used to do the Disaster Recovery Backup [Yn]: y


Please provide the pdkeyutil pass phrase: *******

9 Provide the full path of any upgrade patch files that need to be applied.

Please provide the location of the upgrade patch tar file.
For multiple patches, enter in the order they should be applied.
(leave blank for none) :

For example:
/root/NB_PDE_6.6.1.17630.tar

For multiple upgrade patches that need to be applied, provide the latest patch that
can be installed on top of the base version. Otherwise, provide the patches in the
order in which they should be applied (applicable to EEBs).
Leave blank and press Enter if there are no patches to apply.
Next, respond to the following prompt:

Are there any more patches that need to be applied? (Yn) :

Answer yes (y) to apply additional patches. Answer no (n) to continue with
the disaster recovery process.
10 Type the storage pool ID for the storage pool that you want to restore.
This ID is the value specified for the storagepoolid property in the
topology.ini file. This value is used to retrieve the topology file.

For example:

Please enter Storage Pool ID:

11 Respond to the prompts regarding the topology file.


Directory /Storage/etc must contain the following files:
■ topology.ini or topology.ini.enc

■ topology_nodes.ini

If these files are not present, the script retrieves them. If topology.ini.enc
is present, the script issues the following prompt for the password:

topology.ini file needs to be decrypted before proceeding


enter aes-256-cbc decryption password:

Type the password that you use for the storage pool configuration wizard.
12 Observe the messages that the script produces and take one of the following
actions:

■ If the host and service mappings are synchronized properly with the
topology on the storage pool, the script continues. Proceed to the following
section:
See “Running the DR_Restore_all script - phase 2 (third-party, clustered
recovery)” on page 196.
■ If the host and service mappings are not synchronized with the topology
on the storage pool, the script issues the following message and stops:

WARNING: You are running in a VCS environment. This means the topology_nodes.ini file
that has just been restored may be out of date. VCS failover events could have changed
the physical - service address mapping for nodes between the time the DR backup last ran
and now.
To verify these mappings, please run /opt/pdinstall/edit_topology.sh and select option
"Edit a node" to edit all PureDisk nodes and spare nodes in your topology. Verify that
for PureDisk nodes, the service address in the "Virtual IP/Hostname" entry is on the
same node as the physical address in the "IP/Hostname" entry. If not, update the
"IP/Hostname" entry to contain the correct physical address for the service address.
Verify that for spare nodes, the "IP/Hostname" entry is really the physical address of
a node that is currently acting as a spare node.
Also, select the option "Configure root broker" and verify that the root broker mapping
is correct.

Once you verified the physical - service address mapping is correct for all nodes, and
the root broker mapping, please run this script again.

Note: As the preceding warning message explains, at this time, it is
important to run /opt/pdinstall/edit_topology.sh to start the topology
editor. In the topology editor, for each node, select Edit a node, and click
OK. This action creates the required VCS configuration file on each node.

For example, failovers might have occurred between the time of the last
disaster recovery backup and this restore. If so, the restore topology files
have invalid host and service address mappings for the nodes of the storage
pool.
Correct these inconsistencies and run the DR_Restore_all.sh script again.

13 (Conditional) Verify the NIC identifiers and (conditionally) correct the NIC
identifiers.
Perform this step if you reinstalled PDOS on any nodes.
When you reinstall PDOS, the NIC identifiers can be different from the NIC
identifiers that existed in the previous PDOS installation.

Perform the following steps to verify and, if necessary, correct the NIC
identifiers:
■ Log into the storage pool authority node and type the following command
to start the topology editor:

# /opt/pdinstall/edit_topology.sh

Use the topology editor to check and, if necessary, correct the NIC
identifier for the public NIC. The topology editor displays information
about the public NIC below the service addresses. You can change
information about the public NIC in the topology editor.
■ Open file /Storage/etc/topology_nodes.ini.
■ Search for the following keywords: firstprivate and secondprivate.
■ Verify that the ethn identifiers for firstprivate and secondprivate
point to the correct NICs.
■ (Conditional) Correct the ethn identifiers in the topology_nodes.ini file.
Perform this step if the ethn identifiers differ from those that existed
when PDOS was installed initially.

Running the DR_Restore_all script - phase 2 (third-party, clustered recovery)
In phase 2, the script restores the files from the backup. The DR_Restore_all.sh
script initiates a dialog with you. You need to answer the questions that the script
displays.
To restore a storage pool
1 Examine the topology information that the script displays and specify the
nodes you want to restore.
The script reads the topology file and presents a display like the following
example:

STORAGE POOL TOPOLOGY


Node IP Address Services
---- ---------- ---------
1 10.80.62.1 spa mbe mbs cr nbu
2 10.80.62.2 cr
Node number(s): 1

The preceding example shows the topology that the script can restore in this
disaster recovery operation. Examine this information for accuracy, and
specify all nodes for restore. Use commas to separate the node numbers.

After the reinstall is complete, the script performs the following actions:
■ Remounts the mount points.
■ Restores all the data on the failed nodes.
■ Restores all the databases on all the nodes.
■ Restores the removed data. A data removal job might have been run since
the last time the databases were backed up. In that case, the script also
restores the removed data on the nodes that did not fail. This method
synchronizes the databases and data.

2 Respond to queries from the script regarding passwords.


The restore process can take hours to complete. As the script runs, you might
be asked to specify the system passwords for the remote (or local) nodes. This
prompt occurs early in the process during SSH authentication.
3 (Conditional) Respond to the queries that are displayed by the upgrade
patches.
Refer to the README files that came with the upgrade patches for specific
details about the queries.
Respond no to creating jobs for upgrading agents.
At the end of the upgrade installation, the script prompts you to encrypt the
topology.ini file. Answer no at this time to continue. You will have a chance
to encrypt the topology.ini file at the end of the disaster recovery process.
4 When the restore is complete, answer the prompts about encryption of the
topology.ini file.

For example:

Would you like to encrypt the topology.ini file? [Yn]:y


Encrypting /opt/pdinstall/topology.ini
enter aes-256-cbc encryption password: xxxx
Verifying - enter aes-256-cbc encryption password:

Type the password that you use for the storage pool configuration wizard.
5 Observe the completion message.
When the operation completes successfully, the script displays the following
message:

Disaster recovery complete



Finishing the restore (third-party, clustered recovery)


The following procedure explains the tasks you need to perform to finish the
restore.
To finish the restore
1 On each content router node, log on as root.
2 Run the following command to change DEREF to Yes.

# /opt/pdcr/bin/crcontrol -m DEREF=Yes

3 Repeat step 1 and step 2 until you have run this command on all content
router nodes.

4 (Conditional) Run the following script on the storage pool authority node to
upgrade the security protocol:

# /opt/pdinstall/disable_sslv2.sh

Perform this step if you ran the disable_sslv2.sh script on this storage pool
at any time. The disaster recovery restore does not enable this script
automatically.
Symantec recommends that you run the script unless PureDisk 6.5.x storage
pools need to replicate to this storage pool.
5 (Conditional) Reenable the NetBackup export engine on any nodes that hosted
only a NetBackup export engine service.
Perform this step only if you have a node that hosted only a NetBackup export
engine service.
For information about how to enable a NetBackup export engine, see the
following:
See “About exporting data to NetBackup” on page 73.
6 Perform a full disaster recovery backup.
Make sure you perform a full disaster recovery backup before you perform
any file backups or perform any incremental disaster recovery backups.
Chapter 8
Storage pool authority replication (SPAR)
This chapter includes the following topics:

■ About storage pool authority replication (SPAR)

■ Activating the local storage pool

■ Enabling SPAR backups

■ Running a SPAR policy manually

■ Restoring from a SPAR backup

About storage pool authority replication (SPAR)


PureDisk enables you to replicate storage pool authority configuration information
from an all-in-one local storage pool to a main storage pool. This type of replication
is called storage pool authority replication (SPAR).
You can enable both SPAR and disaster recovery backups. If you enable both, you
can choose the disaster recovery method you want to use. For more information
about when to use SPAR and when to use disaster recovery backup, see the
following:
■ See “Disaster recovery strategies” on page 201.
■ See “About disaster recovery backup procedures” on page 99.

Note: The main storage pool can be configured as a clustered storage pool.
However, Symantec does not support SPAR for clustered local storage pools. When
SPAR runs under cluster control, a failover moves all node functions to a passive
node. However, the failover does not move the SPAR feature that you enabled on
the original local storage pool authority node.

Figure 8-1 shows an example PureDisk environment with two storage pools.

Figure 8-1 SPAR example (the figure shows a PureDisk agent on SP_local, a SPAR
backup path from SP_local to SP_main, and a SPAR restore path from SP_main back
to SP_local)

SP_local is a small, local storage pool in Duluth and SP_main is in a main office
in Minneapolis. SPAR is implemented to back up system information from
SP_local to SP_main. The information in this section uses this example
environment.
SPAR’s main benefit is that it enables you to restart a storage pool and begin
backing up data soon after a disaster.
A SPAR recovery is best performed in the following circumstance:
■ You have an all-in-one local storage pool that is down completely.
■ You want to restore all your user information, data selection definitions, backup
policies, and system policies. This data includes all the user data and storage
pool data that enables client backups. This data does not include the backup
data or backup metadata.
■ You want to be able to start backing up data again very quickly.
SPAR differs from the other disaster recovery methods because SPAR does not
recover your backup data or metadata. A full disaster recovery can take several
hours or days, depending on how much data you backed up. SPAR recoveries are
faster. After a SPAR recovery, PureDisk sees the local storage pool as if it were a
newly configured storage pool. The backups you perform immediately after a
SPAR recovery are all full backups.
When you enable both comprehensive disaster recovery backups and SPAR, you
can choose the recovery method you want to use. If you perform a SPAR recovery,
you can use full disaster recovery methods to restore your file data and metadata
to an alternate storage pool. You need an alternate storage pool in this
circumstance. A full disaster recovery to a local storage pool destroys all the data
that you backed up between the SPAR recovery and the time you performed the
full disaster recovery.

Note: If you experience replication job performance degradation and you have a
high-latency communication network between the two storage pools, you can
possibly improve performance by changing some default TCP/IP settings. For
more information, see "About changing TCP/IP settings to improve replication
job performance" in the PureDisk Best Practices Guide, Chapter 5: Tuning PureDisk.

Disaster recovery strategies


Symantec recommends that you back up your PureDisk environment on a regular
basis.
PureDisk supports the following methods for disaster recovery:
■ Complete disaster recovery.
This method uses a PureDisk policy that sends data to NetBackup or scripts
to back up and restore storage pool specifications, file data, and file metadata.
For more information about complete disaster recovery, see the following:
See “About disaster recovery backup procedures” on page 99.
■ Storage pool authority replication (SPAR).
This method backs up and restores storage pool specifications, but not file
data or metadata.
Along with disaster recovery backups, you can replicate all your data selections
to another storage pool. You can use the replicated data selections to restore a
PureDisk storage pool. If you replicate all data selections to another storage pool,
you can limit data loss after a disaster.
For information about data replication, see the following:
See “About data replication” on page 59.
The method that is best for your site depends on your configuration, practices,
and disaster recovery goals.
Table 8-1 shows the characteristics of these two methods.

Table 8-1 Disaster recovery method comparison

Data restored
  Complete disaster recovery: Storage pool metadata, file data, and file metadata.
  SPAR: Storage pool metadata.

Estimated restore time
  Complete disaster recovery: Depends on the amount of data. This step can take hours.
  SPAR: SPAR restores take much less time than a complete disaster recovery.

Storage pool type
  Complete disaster recovery: Any type of storage pool.
  SPAR: The protected storage pool must be an all-in-one, single-node, unclustered storage pool.

Restore goal
  Complete disaster recovery: You want to restore the storage pool and all backups.
  SPAR: You want to restore the storage pool and back up the clients as quickly as possible.

State of restored storage pool
  Complete disaster recovery: Restores your storage pool to the state it was in when the last disaster recovery backup was run.
  SPAR: Restores your storage pool users, accounts, data selections, policies, and all other storage pool configuration information. This method does not restore any file data or file metadata. After you restore, you need to run backups for all your clients. Old or changed data is no longer available.

Activating the local storage pool


To implement the SPAR method successfully, the local storage pool must be a
client to the main storage pool.
The following procedure explains how to register the local storage pool as the
main storage pool’s client. This procedure also configures the other aspects of
SPAR.
To register the local storage pool
1 In the local storage pool’s Web UI, click Settings > Topology.
2 In the left pane, select a local storage pool.
3 In the right pane, click Activate SPA Replication.

4 Complete the display that appears.


The display requests the same information that you need to supply when you
install an agent on the storage pool authority. These fields are as follows:

Login: The root user’s login on the main storage pool authority node.

Password: The root user’s password on the main storage pool authority node.

Host name (FQDN): The fully qualified domain name (FQDN) of the main storage
pool. Example: SP_Main.acme.com

Storage pool name: The local storage pool’s host name. Type this name as you
want it to appear in the main storage pool’s Web UI.

Binary Location: The path to the Linux agent software on the main storage pool.
This agent software is the same as the agent software that you use for all
other Linux PureDisk clients. You can specify an IP address or a host name to
identify the main storage pool. In this field, do not specify the actual file
in which the Linux agent installation software resides. You specify the file
name in the next field, Binary.
Example: Assume that you installed the agent software packages in the default
location on SP_Main. The default location is as follows:
/opt/pdweb/htdocs/download/Linux_Clients
For the Binary Location field, use the fully qualified domain name (FQDN) and
specify the following path:
https://SP_Main.acme.com/download/Linux_Clients

Binary: The name of the file that includes the Linux agent installation
software on the main storage pool. The Binary Location field’s content points
to this file name. For example, pdagent-Linux_2.6_x86-6.2.0.5.run.

Path: The path to which you want to install the Linux agent on the local
storage pool. For example, /opt/SPAR.
Caution: Do not specify the path to the primary server agent on the local
storage pool. For example, if the server agent is in its default location, do
not specify /opt in this field. This path is the location of the primary
server agent on the local storage pool. Do not overwrite this file.

Dump Path: The path to a dump directory on the local storage pool. PureDisk
writes the local storage pool’s system information to this location before it
copies the system information to the main storage pool. Do not specify an
existing directory. Specify only a unique directory that SPAR can use
exclusively. Each time the SPAR policy runs, it overwrites this directory.
When you perform a SPAR recovery, it restores the last version written. For
example, /Storage/SPAR.

5 Record the information you specified and keep it in a safe place.


If the local storage pool goes down, you need the information you specified
on this screen to perform a restore. All the arguments to these fields appear
on the main storage pool, except for the Path field. If a disaster occurs, you
can restore a downed local storage pool faster if you have this information
recorded and stored safely.
6 Click Save.
After you click Save, PureDisk runs a job to activate the local storage pool as
a client to the main storage pool.
After activation, the status of the two storage pools is as follows:
■ The local storage pool appears in the main storage pool’s list of clients.
■ Two agents reside on the local storage pool. The first is the local storage
pool’s primary server agent. The second is an agent that connects the
local storage pool to the main storage pool.

Enabling SPAR backups


PureDisk includes a system policy for SPAR. The following procedure explains
how to enable this policy. PureDisk creates the jobs to run this policy on the local
site.

To enable SPAR backups


1 On the local storage pool, click Manage > Policies.
2 In the right pane, under the Storage Pool Management Policies category,
click the plus (+) sign to the left of SPA Replication.
3 Select System policy for SPA replication.
4 Complete the required information in the General tab.
See “Completing the General tab for a SPA Replication policy” on page 205.
5 Complete the required information in the Scheduling tab.
See “Completing the Scheduling tab for a SPA Replication policy” on page 206.
6 Complete the required information in the Parameters tab.
See “Completing the Parameters tab for a SPA Replication policy” on page 206.
7 Click Save.

Completing the General tab for a SPA Replication policy


This tab lets you name and define the policy.
To complete the General tab
1 (Optional) Type a new name for this policy in the Name field.
You do not have to rename this policy.
2 Select Enabled or Disabled.
This setting has the following options:
■ If you select Enabled, PureDisk runs the policy according to the schedule
in the Scheduling tab.
■ If you select Disabled, PureDisk does not run the policy according to the
schedule in the Scheduling tab. This selection is the default.
You might use Disabled to stop this policy from running during
a system maintenance period. Then, you would not need to enter
information in the Scheduling tab to first suspend and later reenable this
policy.

3 Select times in the Escalate warning after or the Escalate error and terminate
after drop-down boxes. These times specify the elapsed time before PureDisk
sends a message.
PureDisk can notify you if a policy does not complete its run within a specified
time. For example, you can configure PureDisk to send an email message to
an administrator if a policy does not complete in an hour.
If you select either of these options, create a policy escalation action. The
action defines the email message, defines its recipients, and associates the
escalation action with the policy. For more information about policy
escalations, see the PureDisk Backup Operator's Guide.

Completing the Scheduling tab for a SPA Replication policy


From this tab, use the drop-down lists to specify when the policy is to run.
To specify the schedule
◆ Specify the schedule details that define how frequently you want the policy
to run.

Completing the Parameters tab for a SPA Replication policy


Examine the information on this tab and correct it if necessary.
To examine the parameters tab
1 (Conditional) Correct the information in these fields. Perform this step only
if the fields contain incorrect information.
For example, if the URL, login, or password for the main storage pool ever
change, correct the information in this tab.
2 Click Save.

Running a SPAR policy manually


You can run a SPAR policy according to a schedule. You can also use the following
procedure to run a SPAR policy manually.

To run a SPAR policy


1 On the local storage pool, click Manage > Policies.
2 In the left pane, under Storage Pool Management Policies, click the plus (+)
sign to the left of SPA Replication.
3 Select system policy for spa replication.
4 (Conditional) Enable the policy.
You have to enable a policy before you can run it. In the right pane, click
Enabled and Save.
5 In the right pane, click Run Policy.
6 Examine the output.

Restoring from a SPAR backup


The RestoreSPAAIO.php command restores system data from a SPAR backup and
re-establishes the client connection between the local storage pool and the main
storage pool.
To restore system data from a SPAR backup
1 Gather the information you need to perform a SPAR restore.

2 Install the PureDisk Operating System (PDOS) on the local storage pool.
For more information about how to install PDOS, see the PureDisk Storage
Pool Installation Guide.
3 Use the storage pool configuration wizard to configure the PureDisk storage
pool.
Configure the storage pool software on the local storage pool. For more
information about how to use the storage pool configuration wizard, see the
PureDisk Storage Pool Installation Guide.
Perform the following steps if the storage pool software does not function:
■ Remove the storage pool software’s previous upgrade package.
The following is the directory you need to remove:

/etc/puredisk

■ Configure new storage pool software.


During this reconfiguration, the installer again proposes a new, random
storage pool ID. Do not accept this proposed storage pool ID. Instead,
specify the original storage pool ID. If you do not specify the original
storage pool ID, the storage pool becomes inoperable after you perform
the SPAR restore.
When you configure the new storage pool software, specify the same
passwords that you specified during the previous configuration.

4 Deactivate the agent in the main storage pool that performed the SPAR.
Perform the following steps:
■ Log in to the main storage pool’s Web UI.
■ In the left pane, click Settings > Topology.
■ Select the PureDisk agent that represents the PureDisk storage pool.
■ In the right pane, click Deactivate Agent.

5 Retrieve the information you used to configure SPAR initially.


See “To register the local storage pool” on page 202.
This procedure advises you to record the configuration information and store
it in a safe place. Retrieve this information now. You need this information
and some additional information to create the restore command.
A later step in this procedure directs you to use the RestoreSPAAIO.php
command to perform the restore. You can obtain information about many of
this command’s arguments from the main storage pool. Other information,
however, such as the install path to the SPAR client on the local storage pool
is not recorded anywhere in PureDisk. If you have this information before
you begin, the restore command is easier to specify.

6 Log in to the local storage pool as root.


7 Use the RestoreSPAAIO.php command to perform the restore.
The restore command restores the local storage pool’s system information
and re-establishes the local storage pool’s client relationship to the main
storage pool.
Refer to Table 8-2.
The table shows the arguments to the RestoreSPAAIO.php command. You
can type the arguments in any order, but the arguments must match your
original configuration.
The following example shows the command with all the required arguments:

# /opt/pdag/bin/php /opt/pdspa/cli/RestoreSPAAIO.php --ip SP_Main.acme.com \
--login root --password root --hostname SP_local-Duluth \
--binary pdagent-Linux_2.6_x86-6.2.0.5.run \
--binaryloc https://SP_Main.acme.com/download/Linux_Clients/ \
--agentlocation /opt/SPAR --agentid 2 --dsid 2 --dumpdir /Storage/SPAR/

About the RestoreSPAAIO command


The following sections explain the arguments to the RestoreSPAAIO.php command.

Required arguments for the RestoreSPAAIO command


Table 8-2 shows the required arguments for the RestoreSPAAIO.php command.

Table 8-2 Required RestoreSPAAIO.php command arguments

Argument Meaning

--agentid Local storage pool authority’s agent ID as registered on the
main storage pool. To obtain this information, perform the
following steps on the main storage pool:

1 Click Manage > Agent.

2 In the right pane, select the local storage pool’s agent icon.

3 Note the number in the Storage Pool ID field.

--agentlocation Full path to the directory in which the agent resided for the
previous SPAR backup. You specified this information in the
Path field when you configured SPAR.

--binary File name for agent installer on the main storage pool. You
specified this information in the Binary field when you
configured SPAR.

--binaryloc Path to the agent installer on the main storage pool. Do not
include the file name at the end of this path. You specified
this information in the Binary Location field when you
configured SPAR.

--dsid Data selection ID (DSID) of the data selection that PureDisk
used to do the previous SPAR backup. To obtain this
information, perform the following steps on the main storage
pool:

1 Click Manage > Agent.

2 In the left pane, click the plus sign (+) to the left of the
local storage pool’s agent icon.

3 Select the SPAR data selection.

4 Note the number in the ID field.

--dumpdir Full path to the restore directory. Specify the same dump
directory that you used for the previous SPAR backup.

--hostname Host name of the local storage pool as it appeared in the main
storage pool’s Web UI for the previous SPAR backup.

--ip FQDN, host name, or IP address of the main storage pool.


Specify a resolvable identifier.

--login Root user login to the main storage pool.

--password Root user password for the main storage pool.

-d or --debug (Optional) Runs command in debug mode.

-s or --silent (Optional) Runs command in silent mode.

-v or --verbose (Optional) Runs command in verbose mode.

Optional arguments for the RestoreSPAAIO command


Table 8-3 shows the optional arguments to the RestoreSPAAIO.php command.

Table 8-3 Optional RestoreSPAAIO.php command arguments

Argument Meaning

--info (Optional) Displays PHP information at run time.

--help (Optional) Displays command help information.
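
As a quick check before you run the restore, you can display the command’s
built-in usage text from the local storage pool. The following is a minimal
sketch; the install paths match the earlier example and may differ in your
installation.

# /opt/pdag/bin/php /opt/pdspa/cli/RestoreSPAAIO.php --help

You can also append -v (or --verbose) or -d (or --debug) to the full restore
command shown earlier if you need more detail in the output while you
troubleshoot.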

Upgrading PureDisk with SPAR enabled


You must follow a strict order during the upgrade process if your environment
includes two or more storage pools with SPAR enabled between them.
The order is as follows:
■ First, upgrade the storage pool from which you replicate the storage pool
authority.
■ Second, upgrade the storage pool to which you replicate the storage pool
authority.
For more information about upgrades, see the PureDisk Storage Pool Installation
Guide.
Chapter 9
Reports
This chapter includes the following topics:

■ About reports

■ Permissions and guidelines for running and viewing reports

■ Reports for a running job

■ About policies and workflows

■ Obtaining detailed job reports

■ About Data mining reports

■ Enabling a data mining policy

■ Running a data mining policy manually

■ Obtaining data mining policy output - the data mining report

■ Obtaining data mining policy output - the Web service report

■ Web service reports

■ About Dashboard reports

■ Central storage pool authority reports

About reports
The following sections explain how to run and display PureDisk reports:
■ See “Permissions and guidelines for running and viewing reports” on page 214.
■ See “Reports for a running job” on page 215.
■ See “About policies and workflows” on page 216.
■ See “Obtaining detailed job reports” on page 218.


■ See “About Data mining reports” on page 233.
■ See “Enabling a data mining policy” on page 234.
■ See “Running a data mining policy manually” on page 236.
■ See “Obtaining data mining policy output - the data mining report” on page 236.
■ See “Obtaining data mining policy output - the Web service report” on page 239.
■ See “Web service reports” on page 242.
■ See “About Dashboard reports” on page 249.
■ See “Central storage pool authority reports” on page 254.

Permissions and guidelines for running and viewing reports
The following factors determine whether a user can create reports and view
report results:
■ Permission to create reports. A user needs Report permission and View
permission to create reports.
■ A user in the reporters group and a user with Report permission can run
reports and can run data mining policies. You can assign users to the
reporters group only at the World level.
■ A user needs View permission at the storage pool level to retrieve data
mining reports.

■ Permission to view reports. View permission determines how much information


that user can see in a data mining report.
For example, users with View permission at the storage pool level can view
information for the entire storage pool in data mining reports. If a user has
View permission only at the client level, they can see only that client's
information in a data mining report.
■ Permission to create reports about central storage pools. A user needs Central
Report permission at the storage pool level.
■ Root user permission. If a user logs in as root, PureDisk displays the Reports
tab. Only root users can view the reports displayed from this tab.
Other factors can affect the availability of reporting data. For example, if you
restored a storage pool, wait about 15 minutes before accessing report data.

For more information about permissions, see the PureDisk Client Installation
Guide.

Reports for a running job


When you examine a running job, you can see the steps that PureDisk takes when
it runs. These individual steps are called workflow steps. The following sections
show how to examine a running job and explain the workflow steps.
For information about job reports for PureDisk deduplication jobs, see the PureDisk
Deduplication Option Guide.
For information about examining a running job or restarting a job, see the
following:
■ See “Examining a running job” on page 215.
■ See “Restarting a backup job” on page 216.

Examining a running job


The following procedure explains how to view job step information for running
jobs and for jobs that are in the queue.
To obtain a report on a running or queued job
1 Click Manage > Agents.
2 In the left pane, select the storage pool.
3 In the right pane, pull down More Tasks and select Job Steps Report.
PureDisk displays the Job Steps Report.
4 Click one of the tabs to see different aspects of a job’s progress.
For example, click Job Steps with Problems to see the job steps that had
problems. Click one of the numbers in the Job ID column to display more details
about a specific job.
For more information about the job details displays, see the following:
See “Obtaining detailed job reports” on page 218.
If a data lock password is enabled on this agent, the Files and Errors tabs
prompt you for the password when you attempt to view them.
For more information about the data lock password, see the PureDisk Client
Installation Guide.

Restarting a backup job


If a backup job fails or if you abort a job, you can use the following procedure to
restart the job.
To restart a job
1 Click Monitor > Jobs.
2 (Optional) Narrow your search for the job you are interested in.
If there are many jobs in the right pane, you can narrow your search through
one of the following:
■ Specify information in the Look for field, pull down an object type from
the in field, and click Find now to display only a subset of the default
information.
You can specify a string of characters to search for in the Look for field.
For example, you can specify the full name of a client in the Look for field,
and then you can select Agent Name from the in pull-down menu.
■ In the left pane, select a category from the View Jobs By pull-down menu.

3 Near the top of the right pane, click Restart job.

About policies and workflows


When you click Manage > Policies, the right pane of the Web UI displays backup
jobs, restore jobs, and other jobs. The following sections describe policies and workflows:
■ See “Types of workflows” on page 216.
■ See “Workflows in policies” on page 217.

Types of workflows
A workflow is a collection of steps that PureDisk completes to accomplish a task.
A policy is a special kind of workflow. To create a policy manually, or to edit a
policy, click Manage > Policies. The PureDisk Web UI categorizes policies and
workflows as follows:
■ Backup Policies
■ Data Management Policies
■ Storage Pool Management Policies
■ Restore Workflows
■ Miscellaneous Workflows

If you upgraded from a previous PureDisk release, you might also see the Legacy
Workflows category with one or more workflows beneath it. For example, this
category might contain the following workflows:
■ 6.5 Data selection removal workflow
■ 6.5 Rerouting Workflow
■ 6.5 MBDataMining workflow
Whether the Web UI displays any legacy workflows depends on the presence of
existing workflows at the time of your upgrade. If you ran a data mining policy
before you applied an upgrade, the workflow appears in the Web UI after the
upgrade is installed. You can examine the outcomes of these workflows, or you
can delete them.

Workflows in policies
A workflow step defines a PureDisk action. PureDisk accomplishes its work by
running a series of workflow steps. The individual workflow steps are predefined,
and each performs a specific action. When you use PureDisk to perform a backup,
a restore, or any other kind of task, PureDisk completes that task by running
several workflows.
A policy defines a data management or maintenance action. A backup policy, for
example, defines the schedule that determines when the policy runs, the agents
and the data selections to back up, and various other parameters. PureDisk can
stop processing after a timeout.
A timeout can occur in two different ways:
■ In a workflow step. PureDisk permits internal workflow steps to run only for
a limited time.
■ In a policy. The General tab of a backup policy lets you specify the amount of
time a policy can run before PureDisk terminates the policy run.
PureDisk’s internal watchdog monitors workflow steps. In the case of a backup
policy, the watchdog issues a message if the backup does not complete within the
specified backup window. The watchdog also issues messages for individual
workflow steps or policies that terminate. You can configure event monitoring to
notify you of these occurrences. For more information about how to configure
events, see the PureDisk Backup Operator’s Guide.

Obtaining detailed job reports


The following procedure explains how to obtain a detailed job report. This report
returns information on a per job basis. By analyzing this report, you can determine
how efficiently PureDisk operates in your environment.
To obtain a report for a job that has finished
1 Click Monitor > Jobs.
2 (Optional) Specify information in the Look for field, pull down an object type
from the in field, and click Find now to display only a subset of the default
information.
You can specify a string of characters to search for in the Look for field. For
example, you can specify the full name of a client in the Look for field, and
then you can select Agent Name from the in pull-down menu.
3 In the right pane, in the Job ID column, click the number that corresponds
to the job you want to examine.
An informational window appears with several tabs. For example, the
pop-up window includes the following tabs for a backup job:

General Includes the job’s execution status, whether there were any errors
during the job’s run, and when the job commenced.

Details Shows the status for each specific part of a job’s run. On this tab,
you can see how PureDisk breaks a job apart for processing.

Statistics Provides information on the number of processed files and the


number of bytes of data that was transferred between the client
and the content router.

Files Lists the files that the job backed up. Includes whether PureDisk
backed up the files successfully, the client upon which the file
resided, and the name of the file.

Errors Lists the files with the errors that PureDisk encountered when
it processed the job.

Job log Shows the job step output.

4 (Optional) Click one of the following tabs to perform additional actions:


■ Restart job
■ Stop job gracefully
■ Stop job immediately
■ Delete job

General tab for a Job Details report


The General tab summarizes a job’s activity. It includes information about whether
the job completed successfully, the number of errors, the start time, and the finish
time.

Details tab for a Job Details report


The Details tab shows the job steps PureDisk performed to complete the job. Select
a workflow step in the left column to view information about that step in the right
column.
Some job steps generate more information than PureDisk can display in the right
pane of the Details tab.
To control the amount of information that appears in the right column, click the
drop-down menu that appears directly above the upper-left corner of the right
column. Then, specify a different amount of information to display.
If the information in the right column exceeds the space allowed, PureDisk writes
the following message at the end of the output:
PureDisk truncated this log file. You can download the complete log
file from the Job log tab.

If you see this message, perform the procedure in the following section:
See “Examining lengthy job logs” on page 233.

Statistics tab for a Job Details report


PureDisk provides information on this tab for backup jobs, restore jobs, replication
jobs, and PDDO jobs.

Statistics for a backup job


The statistics for a backup job pertain to all supported file types that were included
in the backup. The PureDisk Backup Operator’s Guide lists the file types that
PureDisk supports.
The Statistics tab for a backup job does not show information about unsupported
files or files that cannot be backed up at all. Unsupported file types are files that
are not supported by PureDisk, such as reparse points on Windows. Examples of
the file types that cannot be backed up are doors and sockets on UNIX systems.

For efficiency reasons, PureDisk always uploads files smaller than 16 KB to the
content router, even if they are already stored on the content router. Consequently,
the backup statistics can be different from what you expect if you back up many
files smaller than 16 KB. For example, the data reduction factor can be lower than
expected, or the number of bytes transferred can be higher than expected.
Table 9-1 contains information about how to interpret the statistics in a backup
job.

Table 9-1 Lines in the Statistics tab for a backup job

Statistic or heading Meaning

Data Reduction:

Global data reduction savings  The percentage of source data bytes that did not
have to be transmitted to the content routers because of data reduction. Higher
numbers correlate to more efficiency.

Global data reduction factor  The total number of bytes for the files that
PureDisk backed up divided by the amount of bytes transferred to the content
routers. Higher numbers correlate to more efficiency.

Data Uniqueness:

Unique files and folders backed up  The number of backed up files that were
globally unique, after global data reduction, before segmentation, and before
compression.

This statistic is the number of files that are unique in the group
of data selections under consideration. The files themselves are
considered, but optimization through segmentation is not
considered. For example, if a file resides on three different
clients, PureDisk stores the file only once and counts it only
once in this number. At the segment level, however, PureDisk
performs more optimization. A file segment can be present in
more than one file, and PureDisk stores that segment only once.

Unique bytes backed up The total number of bytes in the backed up files that were
globally unique.

This statistic is the accumulated size of the unique files


transferred to the content routers. When encryption or
compression are enabled, it is the accumulated size of the
encrypted or compressed unique files. The reported value also
includes all overhead bytes necessary for headers, alignment,
and so on. The values in the Source bytes backed up and Unique
bytes backed up fields are not always identical even if all files
backed up are unique.

Source selection:

Files selected on source The number of files that meet the data selection inclusion and
exclusion rules. Pertains to regular files only. This number does
not include the number of special files, such as symbolic links
or device special files.

Bytes selected on source The total number of bytes for the files that meet the data
selection inclusion and exclusion rules. Pertains to regular files
only. This number does not include the volume of special files,
such as symbolic links or device special files.

Files new on source The number of selected files that are new compared to the
previous backup run. Pertains to regular files only. This number
does not include the number of special files, such as symbolic
links or device special files.

Bytes new on source The total number of bytes for the selected files that are new
compared to the previous backup run. Pertains to regular files
only. This number does not include the volume of special files,
such as symbolic links or device special files.

Files modified on source The number of selected files that were modified compared to
the previous backup run. Pertains to regular files only. This
number does not include the number of special files, such as
symbolic links or device special files.

Bytes modified on source The total number of bytes for the selected files that were
modified compared to the previous backup run. This number
does not include the volume of special files, such as symbolic
links or device special files.

Files not modified on source  The number of files that were not modified since
the last backup ran.

Bytes not modified on source  The total number of bytes for the files that
remained unchanged since the last backup ran.

Files deleted on source The number of files that were deleted since the last backup ran.

Bytes deleted on source The total number of bytes for the files that were deleted since
the last backup ran.

Network:

Backup speed The rate at which PureDisk backed up the total volume of source
data. If only a small amount of unique data needs to be backed
up, this number is higher. If the source data has never been
backed up to PureDisk before, the number is lower.

Bytes transferred The total number of bytes of unique data that were transferred
to the storage pool’s content routers after segmentation and
compression. Includes data related to special files. For special
files, PureDisk stores a special data object on the content routers
to be able to restore these files.

Protected Data:

Source files backed up The number of selected files that were backed up correctly. This
is the sum of new, modified, and nonmodified files that are
correctly backed up and do not contain errors.

Source bytes backed up The total number of bytes for the selected files that were backed
up correctly.

For information about the relationship of this field to the Unique


bytes backed up field, see the Unique bytes backed up field
description.

Source files with errors The number of selected files that PureDisk could not back up.

Source bytes with errors The total number of bytes for the selected files that PureDisk
could not back up.

Time:

Start date/time The date and time that the job started.

Stop date/time The date and time that the job ended.

Backup time duration The amount of time that elapsed between when the job started
and when the job ended.

Notes:
■ Table 9-1 shows the statistics for one job. However, the data mining reports,
when run at the storage pool level, show the data reduction factor for the
storage pool. The storage pool data reduction factor in the data mining reports
represents the volume of all data ever backed up to that storage pool, in bytes,
that is retained and currently available for restores versus the amount of bytes
consumed on the content routers.

The storage pool data reduction factor differs from the statistics because the
statistics in the table are generated for only one job.
More information is available about the data mining reports.
See “About Data mining reports” on page 233.
■ Several factors can affect the Bytes transferred statistic. The data selection
may contain a huge number of small files or have very small segment sizes.
In these cases, the bytes transferred can be much larger than the on-source
values.
The following additional information applies to this statistic:
■ If the backup includes only special files, the "...on source" statistics show
0 files selected because there were no regular files to back up, but the Bytes
transferred statistic can be a large number.
■ Compression has the greatest effect on the Bytes transferred statistic. If
you enable compression, the Bytes transferred statistic is usually lower
than the Bytes selected on source statistic. The Bytes transferred statistic
might be higher if the data being transferred cannot be compressed. Data
that cannot be compressed includes data that is already compressed such
as movies, files in JPEG format, files in MP3 format, or files in ZIP format.
For files that are already compressed, the compression is ineffective and
might result in a slight increase of the data to be transferred.
■ For a repeated backup, the Bytes transferred statistic should be much
lower than Bytes selected on source. The Bytes selected on source statistic
is the sum of all bytes present in the data selection. For an initial backup,
if you disable compression, the Bytes transferred statistic is usually higher
than the Bytes selected on source statistic because of the overhead in the
internal data format.
■ The rate of data change on the client affects the Bytes transferred statistic.
The Bytes selected on source represents the sum of all bytes in the entire
data selection. If the data change rate is 100% (for example, if all files
changed or it is a first-time backup) and you disable compression, the Bytes
transferred statistic is always higher. If the change rate is less than 100%,
Bytes transferred statistic is lower.
■ The file size affects the Bytes transferred statistic. As a performance
enhancement, PureDisk always transfers files whose content is smaller than the
segment size. In this case, PureDisk does not perform a prior-existence check
on the content routers. This is in contrast to backups
for file content that is larger than the segment size. PureDisk always
performs a prior-existence check for files that are larger than the segment
size.

Consequently, if a data selection consists mainly of files smaller than the


segment size, the number of bytes transferred can be higher than expected.
For example, if PureDisk backs up an identical set of files on two different
clients, it would be logical to expect that the bytes transferred would be
low for the second client because the files already exist on the content
router. This is not the case, however.
■ Segmentation size and file size affect the Bytes transferred statistic.
PureDisk uses a special data format to store the data on its content routers.
This data format has a per-segment overhead of a 20-byte header, a 12-byte
trailer, 16 bytes per block of 32 KB of data, and up to 7 padding bytes. The
padding bytes enable PureDisk to align the data according to its internal
data format, which requires data to be aligned on an 8-byte boundary.
For example, if you have a segment of exactly 128 KB, the total data
overhead is 32 + (4 * 16) = 96 bytes. As another example, if the segment is
128 KB - 1 byte long, the total data overhead is 96 + 1 padding byte = 97
bytes.
If the segment is smaller than 32 KB, the number of overhead bytes can
vary between (32 + 16) = 48 and (32 + 16 + 7) = 55 bytes. If you disable
encryption, the header is 14 bytes long instead of 20. The sketch that
follows this list shows the same calculation.
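
The following is a minimal sketch (bash arithmetic) of the per-segment overhead
calculation described in this list. The constants and the 128 KB example come
from this section; the variable names are illustrative only.

SEGMENT=$((128 * 1024))                   # segment size in bytes
HEADER=20; TRAILER=12; PER_BLOCK=16       # per-segment constants from this section
BLOCKS=$(( (SEGMENT + 32767) / 32768 ))   # one 16-byte entry per 32 KB block
PADDING=$(( (8 - SEGMENT % 8) % 8 ))      # pad to an 8-byte boundary
echo $(( HEADER + TRAILER + BLOCKS * PER_BLOCK + PADDING ))   # prints 96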

Statistics for a restore job


Table 9-2 shows how to interpret the information in the Statistics tab for a restore
job.

Table 9-2 Lines in the Statistics tab for a restore job

Statistic Meaning

Restore Selection:

Total files The total number of files and directories that PureDisk restored.
The Directory count statistic reports the number of directories
restored.

Bytes total The total number of bytes in all files and directories that
PureDisk restored.

Target:

Files new on target The number of new files that reside on the client after the
restore is complete. If you restore to the original directory and
overwrite the original files, PureDisk reports that there are no
new files. If you restore the files to a different directory for the
first time, PureDisk reports that all the files you restored are
new files on the client.

Bytes new on target The number of bytes occupied by the restored files. This statistic
is the number of bytes consumed by the files that are noted in
the Files new statistic.

Files modified on target The number of files that are different on PureDisk storage when
compared to the target directory for the restore. This number
counts the number of files on the client source that have
different content when compared to the files you restored.

Bytes modified on target The number of bytes occupied by the files in the Files modified
on target statistic.

Files unmodified on target  The number of files that are identical on both
PureDisk storage and on the target directory. For example, if this value is 0,
this means that all the files you restored have changed since they were backed
up.

Bytes unmodified on target  The number of bytes occupied by the files in the
Files unmodified on target statistic.

Network:

Bytes received by agent The number of bytes actually restored. If nothing has been
replaced, the value is 0.

Average restore rate The average transfer rate during the transmission of unique
data.

Data Uniqueness:

Unique items restored The number of items that PureDisk wrote to the target computer.
This statistic is a count of the number of files, directories, and
special files. It includes only the items that were different on
the target computer as compared to PureDisk storage. If nothing
has changed, this value is 0.

Unique items received Number of unique items that were included in this restore job.
The count excludes directories and special files.

Restore Failures:

Error count The number of errors that were generated during the restore.

Files with errors The number of files that generated errors during the restore
and could not be restored.

Bytes with errors The total number of bytes represented in files that had an error
and could not be restored. For example, if a 1-MB file could not
be restored due to an error, this statistic is 1 MB.

ACL errors The total number of errors encountered when the job attempted
to restore ACLs. This value can be nonzero for a variety of
reasons. For example, the following conditions, and others, can
cause ACL restore errors:

■ An ACL could not be found on storage


■ A parent directory does not allow restore of ACLs.

Verification failures The number of files for which verification failed. This field is
applicable only if you backed up the files with verification
enabled.

Restore Successes:

Directory count The number of all unique directories in the path to each file that
PureDisk restored. Even if you restore only one file from a
directory, PureDisk includes that directory in this statistic.
For example, assume that you restore the file1 and file2 from
the following paths:

■ /a/b/c/file1
■ /a/b/d/file2

In this case, the Directory count is 4.

Devices (Linux and UNIX systems only) The number of block and
character device files restored. This value is always 0 on
Windows systems.

Symbolic links (Linux and UNIX systems only) The number of symbolic links
restored. This value is always 0 on Windows systems.

Hard links (Linux and UNIX systems only) The number of hard links
restored. This value is always 0 on Windows systems.

ACL (Windows systems only) The number of ACLs restored. This


value is always 0 on Linux and UNIX systems.

Verification successes The number of files for which verification succeeded. This field
is applicable only if you backed up the files with verification
enabled.

Time:

Start date/time The date and time that the job started.

Stop date/time The date and time that the job ended.

Restore time duration The amount of time that elapsed between when the job started
and when the job ended.

Statistics for a replication job


Table 9-3 shows how to interpret the information in the Statistics tab for a
replication job. Most of the statistics in the table are also reported for a PDDO
replication job; the table notes exceptions.

Table 9-3 Lines in the Statistics tab for a replication job

Statistic Meaning

Source selection:

Items new in source data selection  The number of data objects replicated to
the target storage pool that were not included in a previously replicated
PureDisk backup.

Bytes new in source data selection  The number of bytes replicated to the
target storage pool that have not been included in a previous PureDisk backup.

Items modified in source data selection  The number of data objects replicated
that have been modified since the previous replication. This number counts the
number of data objects on the source that have different content when compared
to the files you replicated at an earlier time.

Bytes modified in source data selection  The number of bytes occupied by the
data objects in the Items modified in source data selection statistic.

Items deleted in source data selection  The number of data objects that were
deleted from the source data selection since the last replication.

Bytes deleted in source data selection  The number of bytes occupied by the
data objects in the Items deleted in source data selection statistic. This
statistic is the total number of bytes deleted from the source data selection.

Not included in PDDO replication statistics.

Errors:

Items with replication errors  The number of data objects that generated errors
during the replication process.

Not included in PDDO replication statistics.

Bytes with replication errors  The number of bytes of data in the Items with
replication errors statistic that generated errors during the replication
process.

Not included in PDDO replication statistics.

Replicated Data:

Items replicated The number of files, directories, or data items replicated to the
target storage pool.

Bytes replicated The number of bytes replicated to the target storage pool.

Network:

Bytes transferred The number of bytes transferred to the target storage pool. This
statistic includes bytes included in any overhead that was needed
for the transfer.

Time:

Start date/time The date and time that the job started.

Stop date/time The date and time that the job ended.

Replication time duration  The amount of time that elapsed between when the job
started and when the job ended.

Statistics for a PDDO backup job


Table 9-4 shows how to interpret the information in the Statistics tab for a PDDO
job.

Table 9-4 Lines in the Statistics tab for a PDDO job

Statistic Meaning

Data Reduction:

Global data reduction savings  The percentage of source data bytes that did not
have to be transmitted to the content routers because of data reduction. Higher
numbers correlate to more efficiency.

Source Selection:

Bytes scanned during backup  The total number of bytes scanned by PDDO from the
backup.

Media server cache hit percentage  The percentage of backup data that PureDisk
found in the media server’s cache.

Network:

Bytes transferred to content router  The number of bytes of new, nondeduplicated
data that PureDisk sent to the content router for storage.

Time:

Start date/time  The date and time that the job started.

Stop date/time  The date and time that the job ended.

Backup time duration  The amount of time that elapsed between when the job
started and when the job ended.

Files tab for a Job Details report


For a backup job, the Files tab provides information on the files that PureDisk
backed up from the client.

If a data lock password is enabled on an agent, this tab prompts you for the
password when you attempt to view it. For more information about the data lock
password, see the PureDisk Client Installation Guide.
The following information appears on this tab:

Agent The name of the agent from which the data selection was backed up.

Data selection The name of the data selection.

Folder Specifies the folder that contains the file on the client.

File The name of the file that PureDisk backed up.

Size The size of the file that PureDisk backed up.

Modified The date and time that the file was last modified. Also see the Enable
change detection backup feature. For more information about specific
backup features, see the PureDisk Backup Operator’s Guide.

Download A link you can click to restore the file.

This screen contains no information when PureDisk does not back up any files.
This situation is possible for an incremental backup if files have not changed.
Tip: You can restore a file by clicking a Download link in the Download column.

Errors tab for a Job Details report


For a backup job, the Errors tab provides a list of files that PureDisk did not back
up due to reasons such as the file was open for editing or the file was deleted in
between job steps. This tab does not display error messages that indicate why the
job failed.
If a data lock password is enabled on an agent, PureDisk prompts you for the
password when you attempt to view the Errors tab. For more information about
the data lock password, see the PureDisk Client Installation Guide.

Job log tab for a Job Details report


The Job log tab displays information about job processing and any errors PureDisk
encountered during processing.
The PureDisk agent cannot upload log files that are larger than 5 MB. PureDisk
truncates log files that are larger than 5 MB.

Job log tab for a backup job


Job logs are available for different types of jobs. If errors occur during the backup,
PureDisk displays message codes in the Job log tab.
Table 9-5 shows the message codes that PureDisk displays for a backup job.

Table 9-5 Message codes for a backup job

Error code Description Remark

1 QUEUED The job is queued.

2 SUCCESS The job step completed successfully.

3 ERROR The job step failed.

4 RUNNING The job step is running.

5 READY_TO_RUN A job step is preparing to run.

6 SUCCESS_WITH_ERRORS The job step ran successfully but


encountered nonfatal errors.

7 ABORTED_BY_USER The user stopped the job step.

8 ABORTED_BY_WATCHDOG The PureDisk watchdog stopped the


job step after the job step timed out.

9 RUNNING_HOLD The job is running, but the current


job step is on hold.

10 INCOMPLETE Not all required fields were


calculated.

100 UNKNOWN_LOCALLY This local file type is not supported.

101 NONEXISTING_LOCALLY File not found.

102 UNREADABLE_LOCALLY Access denied. An ACL does not


permit read access.

103 UNWRITABLE_LOCALLY On a restore operation, this message


means that the disk is full. Otherwise,
this means that a parent directory
does not allow write access.

104 LOCKED_LOCALLY The file is locked by another process.


Applies to Windows clients only.

200 CR_CONNECTION_ERROR Could not connect to the content


router.

201 NONEXISTENT_ON_SP PureDisk could not find the metadata


for this file in the metabase engine.

202 UNRETRIEVABLE_FROM_SP One of the following conditions is


present:

■ A read error occurred on the


storage pool.
■ The data selection does not allow
read permission. This agent does
not have read permission.

203 UNWRITABLE_TO_SP One of the following conditions is


present:

■ A write error occurred on the


storage pool.
■ The data selection does not allow
write permission. This agent does
not have write permission.

204 UNKNOWN_CR_ERROR This error is the generic content


router error. For more information,
see the agent or the server logs.

300 MB_CONNECTION_ERROR Unused.

301 NONEXISTENT_ON_MB See 201.

302 UNRETRIEVABLE_FROM_MB See 202.

303 UNWRITABLE_TO_MB See 203.

304 UNKNOWN_MB_ERROR Pdweb not running. Generic error log.


For more information, see the agent
or server logs.

Job log tab for a restore job


The job log of a restore job can contain the following misleading message for the
getfiles step:

Bandwidth limit set to 0 KB/s via agent configuration



The value of 0 indicates that no limit is set. It does not mean that no data is
transferred.

Examining lengthy job logs


PureDisk truncates job logs that exceed its maximum display length. If this
happens, perform the following procedure to display job information in another
window.
To examine lengthy job output
1 Click the Job log tab.
2 Click Download Whole Job Log.
This link appears above the upper-right corner of the right column.
3 On the pop-up window that appears, specify the interface you want to use to
display the file.
For the display method you choose, you might need to insert return characters
because the return characters might not appear correctly in the display.
In some cases, the job log might exceed the display length for this window.
The job log includes log information at the beginning and at the end, which
enables you to see what happened when the job started and when the job
finished. In this case, PureDisk deletes repetitive information from the middle
of the job log report.

About Data mining reports


A data mining policy collects information about all the files in a PureDisk storage
pool. When you run a data mining policy from the Web UI, PureDisk gathers
information from the metabase server for all data selections. It then summarizes
the information in a table. This report uses data mining to extract and present
information in report format and in XML format.
A data mining report displays information as of the last time a data mining policy
ran. If you move an agent to a different department, the data mining reports
reflect the updated department after you run a data mining policy again.
For example, if you move agent AGENT1 from location OLD to location NEW and then
click Data Mining Report, AGENT1 appears in OLD. If you run a data mining policy
and then click Data Mining Report, AGENT1 appears in NEW.
PureDisk provides a default data mining policy, but you must edit this policy and
enable it.

The following sections describe how to edit, run, manipulate, and read data
from data mining policies:
■ See “Enabling a data mining policy” on page 234.
■ See “Running a data mining policy manually” on page 236.
■ See “Obtaining data mining policy output - the data mining report” on page 236.
■ See “Obtaining data mining policy output - the Web service report” on page 239.

Enabling a data mining policy


The following procedure explains how to enable a data mining policy.
To enable a data mining policy
1 Click Manage > Policies.
2 In the left pane, under Storage Pool Management Policies, click the plus (+)
sign to the left of Data Mining.
3 Select System policy for data mining.
4 Complete the General tab.
See “Completing the General tab for a data mining policy” on page 234.
5 Complete the Scheduling tab.
See “Completing the Scheduling tab for a data mining policy” on page 235.
6 Complete the Parameters tab.
See “Completing the Parameters tab for a data mining policy” on page 235.
7 Click Save.

Completing the General tab for a data mining policy


The General tab lets you name and define the policy.
To complete the General tab
1 (Optional) Type a new name for this policy in the Policy name field.
You do not have to rename this policy.
2 Select Enabled or Disabled.
This setting has the following options:
■ If you select Enabled, PureDisk runs the policy according to the schedule
in the Scheduling tab.
■ If you select Disabled, PureDisk does not run the policy according to the
schedule in the Scheduling tab. This value is the default.
For example, you can use Disabled if you want to stop running this policy
during a system maintenance period, but you do not want to enter
information in the Scheduling tab to suspend, and then reenable, this
policy.

3 (Optional) Specify an escalation procedure.


Select times in the Escalate warning after or the Escalate error and terminate
after drop-down boxes to specify the elapsed time before PureDisk sends a
message.
PureDisk can notify you if a policy does not complete its run within a specified
time. For example, you can configure PureDisk to send an email message to
an administrator if a policy does not complete in an hour.
If you select either of these options, create a policy escalation action that
defines the email message, defines its recipients, and associates the escalation
action with the policy. For more information, see the PureDisk Backup
Operator’s Guide.

Completing the Scheduling tab for a data mining policy


From the Scheduling tab, use the drop-down lists to specify when the policy is to
run.
To specify the schedule
◆ Specify the schedule details that define how frequently you want the policy
to run.

Completing the Parameters tab for a data mining policy


From the Parameters tab, use the radio button to specify the format of your report.
To specify a report format
◆ Choose a report format of Light (default) or Full.
These settings specify whether PureDisk includes file extension information
in the reports. File extensions include .mp3, .doc, .txt, and so on.
When the Light setting is in effect, the report does not contain file extensions.
When the Full setting is in effect, the report includes all file extension
information. The Full setting also increases the load on the storage pool,
particularly the metabase engines.

Running a data mining policy manually


PureDisk creates one job for each metabase engine when you run a data mining
policy. The Web UI displays information for each job when it runs.
To run a data mining policy
1 Click Manage > Policies.
2 In the left pane, under Storage Pool Management Policies, click the plus sign
(+) to the left of Data Mining.
3 Select System policy for data mining.
4 (Conditional) Enable the policy.
You must enable a policy before you can run it. If the policy is disabled, on
the General tab, click Enabled and click Save.
5 In the right pane, click Run Policy.
6 Examine the output.
For information about how to examine the output, see one of the following:
See “Obtaining data mining policy output - the data mining report” on page 236.
See “Obtaining data mining policy output - the Web service report” on page 239.

Obtaining data mining policy output - the data mining report
A data mining policy gathers statistics about the files in a storage pool. You can
use the following procedure to tabulate the statistics into a report.
To retrieve information for a storage pool or data selection from a data mining
policy
1 Make sure that you have the correct permissions to create and view reports.
The data mining report shows only the data selections a user is entitled to
view.
See “Permissions and guidelines for running and viewing reports” on page 214.
2 Make sure that a data mining policy has been run.
You can run the policy manually or you can configure PureDisk to run the
policy on a schedule.
For information about how to run a data mining policy, see the following:
See “Running a data mining policy manually” on page 236.

3 Click Manage > Agents.


4 In the left pane, select the scope of the data mining report.
You can obtain a data mining report on one of the following levels:
■ A storage pool
■ A location
■ A department
■ A client
■ A data selection

5 Click Data Mining Report in the right pane.


6 (Optional) Click Select in history in the upper right corner.
Perform this step if you want to view a data mining report from an earlier
data mining workflow. By default, PureDisk displays data mining information
from the most recent run of the data mining workflow.
The following list explains some of the information in the data mining report:

Total Storage Pool Volume Used  The volume of backup data, in bytes, on the
content routers in this storage pool.

The information in this field is updated every 15 minutes. The output in this
field might not account for the data that was added to storage during the last
15 minutes.

During installation, the agents are stored on the storage pool. Consequently,
if you run a data mining policy before any backups have run, the report
indicates that a small amount of storage is already in use.

Total Storage Pool Data Reduction Factor  The volume of all data ever backed up
to this storage pool, in bytes, that is retained and currently available for
restores divided by the global storage pool volume.

See “Interpreting the storage pool data reduction factor” on page 238.

Total size on source  The volume of files, in bytes, in this data selection on
the source client. This number includes all versions of all files.

Storage pool volume used  The estimated data volume, in bytes, stored on the
storage pool’s content routers for this data selection. This statistic is the
source size of this data selection divided by the storage pool data reduction
factor.

Interpreting the storage pool data reduction factor


The data mining report shows the storage pool data reduction factor. This value
shows how much disk space the files consume on the storage pool content routers
relative to the amount of disk space that the files consumed on primary storage.
In the data mining reports display, the storage pool data reduction factor can be
equal to 1, greater than 1, or less than 1, as follows:
■ If this factor is 1, the backed up source files consume the same capacity on the
storage pool content routers as on the source clients.
■ If this factor is greater than 1, this value is the factor by which PureDisk has
reduced the source volume through data reduction before it writes to the
content routers.
■ If this factor is less than 1, the backed up data consumes more space on the
content routers compared to the backed up volume of the source files. This
can be due to compression and encryption overhead.
For example, assume that PureDisk backed up eight 10-MB files within a data
selection. The eight files had identical content, and this particular content is new
to the storage pool. PureDisk determines that these identical files all have the
same fingerprint. Because they all have the same fingerprint, PureDisk stores
only one copy on the content routers.
The statistics are as follows:
■ The volume on source for this data selection is 8 X 10 MB = 80 MB (source size).
■ The volume on the storage pool for this data selection is 10 MB (storage pool
size).
■ The storage pool data reduction factor is 80/10 MB = 8.

Effect of compression on data reduction


If you enable compression for a data selection, the volume on the content routers
is even lower, and the storage pool data reduction factor is higher.

Effects of segmentation on data reduction


Segmentation affects data reduction because data reduction assumes that the
segment size for a file is the same every time you back it up. PureDisk might have
to re-segment a very large file every time it is backed up if the file grows or shrinks
between backups. If the file is re-segmented over multiple backups, data reduction
is less efficient.

A smaller segment size can yield better data reduction rates. However, performance
can degrade because of the higher maintenance costs involved in managing a
larger number of segments.
A larger segment size can yield better performance, but the data reduction rate
can degrade. Larger segments can also use a higher amount of disk space.
PureDisk considers the following factors when it segments the file:
■ The default segment size for the data selection type or the segment size you
specify.
■ The maximum number of segments allowed, which is 5,120 segments.
■ The maximum segment size allowed, which is 50 MB.

Obtaining data mining policy output - the Web service report
After you run a data mining policy, you can display your output through the data
mining Web service.
For information on how to obtain a data mining report, see the following:
See “Obtaining data mining policy output - the data mining report” on page 236.
To obtain a data mining Web service report, type the following into your browser:

https://url/spa/ws/ws_datamining.php?login=login&passwd=pwd&action=getReport&runid=num

Table 9-6 shows the arguments in the URL.

Table 9-6 Arguments in the data mining Web services reports

Argument Meaning

url The host name or IP address of the storage pool authority. For example:
100.100.100.100.

login The storage pool authority administrator login. For example: root.

pwd The storage pool authority administrator password. For example:


mypwd.

num The number of the data mining policy run that you want to display in
report format. PureDisk retains the last 10 runs of the data mining
workflow.

For example, if you want to display the most recent policy run, specify
1. If you want to display information from the policy run just before
the most recent, specify 2. If you ran the data mining policy every day
for the last 10 days and you want to display the oldest run, specify 10.

To verify the report output with data mining policy runs, compare the
timestamp in the header of the report with the times of your data
mining policy runs.

When you run the Web service report to obtain data mining output, you retrieve
information on all data selections in the storage pool. You cannot narrow the
report to include information for only one data selection.
Information about how to report on only one data selection is available.
See “Obtaining data mining policy output - the data mining report” on page 236.
For example, assume that you type the following URL:

https://valhalla.minnesota.com/spa/ws/ws_datamining.php?login=root&passwd=root&action=
getReport&runid=1

PureDisk returns output as follows:

This XML file does not appear to have any style information
associated with it. The document tree is shown below.
-<MBDatamining TimeStamp="2007-08-30 03:20:02 PM">
<filtre>*</filtre>
-<mbe_range_statistics>
-<mbe id="1">
-<dataselection id="4" dataselectionname="desktop" agentid="2"
agentname="TRAVELSCRABBLE" locationid="0" departmentid="0"
locationname="Unknown location" departmentname="Unknown department"
ostype="10">
<location name="Unknown location"/>
<department name="Unknown department"/>
<sizeOnSource_dataselection
unit="bytes">153405265</sizeOnSource_dataselection>
<sizeOnStoragePool_dataselection
unit="bytes">4176478208</sizeOnStoragePool_dataselection>
-<ACCESSRANGE>
-<item id="-1 day">
<amountoffiles>13</amountoffiles>
<totalfilesize>33017267</totalfilesize>
</item>
-<item id="1 day-1 week">
<amountoffiles>16</amountoffiles>
<totalfilesize>60604429</totalfilesize>
</item>
-<item id="1 month-1 year">
<amountoffiles>45</amountoffiles>
<totalfilesize>56263445</totalfilesize>
</item>
-<item id="1 week-1 month">
<amountoffiles>4</amountoffiles>
<totalfilesize>3520124</totalfilesize>
</item>
</ACCESSRANGE>
-<MODRANGE>
-<item id="+1 year">
<amountoffiles>10</amountoffiles>
<totalfilesize>1814927</totalfilesize>
</item>
-<item id="-1 day">
<amountoffiles>2</amountoffiles>
<totalfilesize>29874649</totalfilesize>
</item>
-<item id="1 day-1 week">
<amountoffiles>1</amountoffiles>
<totalfilesize>207</totalfilesize>
</item>
-<item id="1 month-1 year">
<amountoffiles>61</amountoffiles>
<totalfilesize>70561729</totalfilesize>
</item>
-<item id="1 week-1 month">
<amountoffiles>4</amountoffiles>
<totalfilesize>51153753</totalfilesize>
</item>
</MODRANGE>
-<SIZERANGE>
-<item id="0-10KB">
<amountoffiles>20</amountoffiles>
<totalfilesize>26942</totalfilesize>
</item>
-<item id="100KB-1MB">
<amountoffiles>19</amountoffiles>
<totalfilesize>7200946</totalfilesize>
</item>
-<item id="10KB-100KB">
<amountoffiles>29</amountoffiles>
<totalfilesize>1745269</totalfilesize>
</item>
-<item id="10MB-100MB">
<amountoffiles>4</amountoffiles>
<totalfilesize>129892970</totalfilesize>
</item>
-<item id="1MB-10MB">
<amountoffiles>6</amountoffiles>
<totalfilesize>14539138</totalfilesize>
</item>
</SIZERANGE>
-<TYPES>
-<item id="0">
<amountoffiles>78</amountoffiles>
<totalfilesize>153405265</totalfilesize>
</item>
</TYPES>
</dataselection>
</mbe>
</mbe_range_statistics>
-<dataselectionlist_SIS_reporting>
<global_storagepool_VOL
unit="bytes">4176478208</global_storagepool_VOL>
<global_storagepool_SIS>0.03673077108511</global_storagepool_SIS>
</dataselectionlist_SIS_reporting>
<MBDataminingHistory/>
</MBDatamining>
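
If you prefer to retrieve this report from a script rather than from a browser,
a command-line HTTP client can request the same URL. The following sketch uses
curl with the placeholder host, credentials, and run ID from the example above;
it is an illustration only, not a documented PureDisk interface. The -k option
skips certificate verification, which may be needed if the storage pool uses a
self-signed certificate, and -o writes the XML to a file. The Caution later in
this chapter about POST versus GET applies here as well, because this command
passes the credentials on the URL.

# curl -k -o datamining_run1.xml \
  "https://valhalla.minnesota.com/spa/ws/ws_datamining.php?login=root&passwd=root&action=getReport&runid=1"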

Web service reports


To use the Web service reports, you must type a URL into your browser to navigate
to a Web service page. On this Web service page you enter login and password
information, as well as a request for a specific report. The reports display in XML
format. You can import the XML output to a spreadsheet, or you can follow your
spreadsheet’s instructions for importing the XML data. See the following section
for more information:
See “Importing report output into a spreadsheet” on page 249.

Caution: For security reasons, use a Web browser that uses POST requests, not
GET requests, when retrieving Web service reports. For example, Microsoft
Internet Explorer does not use POST requests and is not secure.
For example, the following URL contains login information, password information,
and a request for information about successful job runs:

https://100.100.100.100/spa/ws/ws_getsuccessfuljobs.php?login=root&passwd=root

Note: The Web UI URL parameters are case sensitive. Make sure you type them
exactly as shown in this chapter. The ampersand (&) character acts as a separator
for the fields in the URL. The bracket characters ([ ]) in the following sections
represent optional URL fields.

The following sections describe reports that you can obtain through the Web
services:
■ See “Job status Web service reports” on page 243.
■ See “Dashboard Web service reports” on page 246.
■ See “Obtaining data mining policy output - the Web service report” on page 239.

Job status Web service reports


You can obtain the following types of job status reports from the Web service
reporting tool:

ws_getsuccessfuljobs.php   Information about jobs that exited successfully.

ws_getpartialjobs.php      Information about jobs that exited with a status of
                           Partial success.

ws_getfailedjobs.php       Information about jobs that exited with a status of
                           Failed.

The URL format for a Web service report on job statuses is as follows:

https://url/spa/ws/web_service?login=login&passwd=pwd[&filter][&filter]

Table 9-7 shows the arguments in the URL.

Table 9-7 Arguments in the job status Web services reports

Argument      Meaning

url           The URL for the storage pool authority. For example: 100.100.100.100.

web_service   Specifies the type of Web service. The job status reports generate
              information about successful, partially completed, and failed jobs.
              Type one of the following:
              ■ ws_getsuccessfuljobs.php
              ■ ws_getpartialjobs.php
              ■ ws_getfailedjobs.php

login         The storage pool authority administrator login. For example: root.

pwd           The storage pool authority administrator password. For example: mypwd.

filter        (Optional) One or more filters. If you specify a filter, the report
              displays only the data that matches the filter. If you specify more
              than one filter, use the ampersand (&) character to separate each
              filter. The filter names are case sensitive.

              See Table 9-8 on page 244.

Table 9-8 shows the filters you can specify on a Web service URL for the job status
reports.

Table 9-8 Filters for job status reports

Filter                Meaning

locationName=name     Returns only jobs from the specified location. For example:
                      Brussels.

departmentName=name   Returns only jobs from the specified department. For example:
                      hr.

fromJobID=id          Returns only jobs that have a job ID that is equal to or
                      greater than the job ID you specify.

                      To find a job ID, click Details in the right pane for a job
                      that has finished. The ID is on the General tab.

                      For example: 465.

fromDate=mm-dd-yyyy   Returns only jobs that started on or after the specified
                      date. For example: 06-30-2007.

toDate=mm-dd-yyyy     Returns only jobs that ended on or before the specified
                      date. For example: 10-05-2007.

workflowName=name     Returns only jobs for a particular workflow. To see the
                      list of possible values for name, click Manage > Policies
                      and observe the left pane of the Web UI. This pane shows
                      the list of possible policies and workflows. Specify the
                      policy or workflow name as shown in the Web UI. Examples:
                      ■ Data Removal
                      ■ MS Exchange Backup

                      Note: Specify the workflow name exactly as shown in the
                      Web UI. The name is case sensitive.

For example, assume that you want to examine statistics for restore jobs. You can
enter the following URL:

https://100.100.100.100/spa/ws/ws_getsuccessfuljobs.php?login=root&passwd=root&workflowName=Files and Folders Restore

The following shows partial output:

<?xml version="1.0" encoding="iso-8859-1" ?>


- <jobs>
- <job>
<jobID>2</jobID>
<agentID>1000000</agentID>
<agentName>SPA</agentName>
<locationName>my location</locationName>
<departmentName>my department</departmentName>
<executionStatusID>2</executionStatusID>
<executionStatusName>SUCCESS</executionStatusName>
<workflow>Restore Workflow</workflow>
<scheduledStartTime>1151762912</scheduledStartTime>
<startDate>1151762916</startDate>
<finishDate>1151762970</finishDate>
<dataselectionID>2</dataselectionID>
<dataselectionName>reroute</dataselectionName>
<statistics />
</job>
- <job>
<jobID>4</jobID>
<agentID>1000000</agentID>
<agentName>SPA</agentName>
<locationName>my location</locationName>
<departmentName>my department</departmentName>
<executionStatusID>2</executionStatusID>
<executionStatusName>SUCCESS</executionStatusName>
<workflow>Restore Workflow</workflow>
<scheduledStartTime>1151763438</scheduledStartTime>
<startDate>1151763439</startDate>
<finishDate>1151763545</finishDate>
<dataselectionID>2</dataselectionID>
<dataselectionName>reroute</dataselectionName>
<statistics />
</job>
.
.
.

The preceding output has been truncated at the end for inclusion in this manual.
If you run a report that contains information about backup jobs, the information
PureDisk returns contains the same statistics that you can obtain from clicking
Data Mining Report in the left pane after a data mining workflow was run.
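
You can also combine several of the filters from Table 9-8 in a single request by
separating them with the ampersand character. The following URL is illustrative
only; it reuses the placeholder address, credentials, and example filter values
from the tables above:

https://100.100.100.100/spa/ws/ws_getfailedjobs.php?login=root&passwd=root&locationName=Brussels&fromDate=06-30-2007&toDate=10-05-2007

This request returns only the failed jobs from the Brussels location that started
on or after June 30, 2007 and ended on or before October 5, 2007.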

Dashboard Web service reports


This dashboard report includes status information for all PureDisk client agents,
server agents, and services in the storage pool. This report includes all the
information that PureDisk generates for the dashboard reports.
See “About Dashboard reports” on page 249.
You can write an application to parse or extract pieces of information from this
report’s output and show that in a Web page as a custom dashboard. The
information for this report is not generated in real time. PureDisk refreshes the
data every 15 minutes. The timestamp is shown at the beginning of the XML
report.
The report is formatted in XML. The URL format is as follows:

https://url/spa/ws/ws_dashboard.php?login=login&passwd=pwd&filterType=type&filterID=id
&action=getDashBoard

Note: Type the preceding URL on one continuous line.

Table 9-9 shows the arguments in the URL.

Table 9-9 Arguments in the Web services reports

Argument   Meaning

url        The URL for the storage pool authority. For example: 100.100.100.100.

login      The storage pool authority administrator login. For example: root.

pwd        The storage pool authority administrator password. For example: mypwd.

type       The type of record on which you want to filter. Specify agent.

id         The agent ID number. To obtain this number for an agent, complete the
           following steps:
           ■ Click Manage > Agent.
           ■ In the left pane, select an agent.
           ■ In the right pane, click Agent Dashboard.
             Depending on the agent you choose, you might need to pull down
             More Tasks and select Agent Dashboard.
           ■ Visually inspect the right pane to obtain the agent number from
             the Agent ID field.

           Tip: Obtain this id number before you start to type the URL for the
           Web service report. If you begin to type the report URL into a
           browser’s address field, and have to click in the PureDisk Web UI to
           retrieve this id information, you lose the information you typed into
           the address field. Alternatively, you can retrieve the id in a
           different window.

For example, assume that you want to obtain a dashboard Web service report for
an agent. You can enter the following URL:

https://valhalla.minnesota.com/spa/ws/ws_dashboard.php?login=root&passwd=root&filterType=agent&filterID=3&action=getDashBoard

The following shows partial output:

<DashBoard TimeStamp="2008-03-15 09:43:37">


<StoragePoolID>33</StoragePoolID>
<Name>valhalla</Name>
<Description/>
<Location id="1">mn</Location>
<SystemDS id="1">System DS for STP 33</SystemDS>
<SelectedAgent/>
<SelectedLocation/>
<SelectedDepartment/>
-
<Agents>
-
<Agent id="33000000">
<ID>33000000</ID>
<IsServerAgent>1</IsServerAgent>
<HostName>10.80.139.49</HostName>
<Description/>
<OSVariant id="">Not available</OSVariant>
<OSExtensions/>
<Status id="2">ACTIVE</Status>
<IPAddress>10.80.139.49</IPAddress>
<OS id="20">Linux</OS>
<Version>6.5.0.8987</Version>
<Department id="1">qe</Department>
<Location id="1">mn</Location>
<MetabaseEngine id="1"
agentid="33000000">10.80.139.49</MetabaseEngine>
<Controller id="1" agentid="33000000">10.80.139.49</Controller>
<ConnectionStatus>Connected</ConnectionStatus>
-
<ConnectionDetails>
<FromIP>10.80.139.49</FromIP>
<SessionID>pdagent</SessionID>
<Sent unit="MB">45.11</Sent>
<Received unit="MB">1.34</Received>
<Version/>
</ConnectionDetails>
-
<Jobs>
-
<Job id="77">
<JobID>77</JobID>
<AgentID>33000000</AgentID>
<Workflow id="13500">Maintenance</Workflow>
<Policy id="8">System policy for Maintenance</Policy>
<PolicyRunID>28</PolicyRunID>
<Scheduled>2008-03-15 06:20:01</Scheduled>
<Start>2008-03-15 06:20:03</Start>
<Stop>2008-03-15 06:20:23</Stop>
<Status id="2">SUCCESS</Status>
</Job>
</Jobs>
<JobSteps/>
-
<Statistics id="33000000" TimeStamp="2008-03-15 09:30:01"
xml:base="/Storage/var/stats_33000000.xml">
-
.
.
.

The preceding output has been truncated at the end for inclusion in this manual.

Importing report output into a spreadsheet


You can import the XML formatted output from a PureDisk Web service report
into a spreadsheet, such as a Microsoft Excel spreadsheet. These instructions are
written in general terms. For more information, see your spreadsheet’s
documentation.
To import Web service data into a Microsoft Excel spreadsheet
1 Use your Web browser to save the output as an XML file.
2 Import the data into your spreadsheet.
For example, in Microsoft Excel, specify Data > Import External Data > Import
Data. When it prompts you, specify the file to which you saved the XML
output.
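
If you prefer the command line to a spreadsheet, a generic XML tool can also
inspect the saved report. The following sketch is not a PureDisk command; it
assumes that the xmllint utility from libxml2 is installed on your workstation
and that you saved a job status report as report.xml. The first command
pretty-prints the XML, and the second counts the job elements in the report
(the --xpath option requires a reasonably recent libxml2):

# xmllint --format report.xml
# xmllint --xpath 'count(//job)' report.xml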

About Dashboard reports


Dashboards provide quick reports on system status and activity. These reports
cover node capacity, storage pool activity, and agents.
The following sections describe how to obtain these reports:

■ See “Displaying the Capacity dashboard” on page 250.


■ See “Displaying the Activity dashboard” on page 251.
■ See “Displaying the Server agent dashboard” on page 252.
■ See “Displaying the Client agent dashboard” on page 253.
Dashboard reports are available when a central reporting storage pool authority
is installed.
See “Central storage pool authority reports” on page 254.

Displaying the Capacity dashboard


The capacity dashboard shows information about total and used capacity on the
content router and metabase engine. For a content router, this dashboard shows
the total amount of space available to the content router in /Storage/data, the
amount of used disk space, and the number of data segments already stored on
the content router. For a metabase engine, this report shows the amount of disk
space used by the latest version records only, by all version records, and the
percentage of disk space used.
PureDisk generates and updates the information for the capacity dashboard every
15 minutes. Therefore, if you check the capacity dashboard immediately after a
backup completes, the information in the display might not reflect the conditions
that result from that backup.

To display the capacity dashboard


1 Click Settings > Topology.
2 In the left pane, select the storage pool.
3 In the right pane, click Capacity Dashboard.
The following figure shows an example capacity dashboard.

Displaying the Activity dashboard


The activity dashboard contains information about the PureDisk services that are
running on all of the nodes in your storage pool.
To refresh the data in this dashboard, press function key F5.

To display the activity dashboard


1 Click Settings > Topology.
2 In the left pane, select the storage pool.
3 In the right pane, click Activity dashboard.
The following figure shows an example activity dashboard.

Displaying the Server agent dashboard


The server agent dashboard shows current activity on the server agents that reside
on the storage pool nodes. It displays information about the last completed job,
all current jobs, and all current job steps.
To refresh the data in this dashboard, press function key F5.

To display the server agent dashboard


1 Click Settings > Topology.
2 In the left pane, select the storage pool.
3 In the right pane, click Server agent dashboard.
The following figure shows an example server agent dashboard.

Displaying the Client agent dashboard


The agent dashboard shows information about running jobs and the jobs that
completed most recently on a selected department or a selected agent.
To refresh the data in this dashboard, press function key F5.
To display the agent dashboard
1 Click Manage > Agent.
2 Expand the tree view in the left pane until the department or agent that you
want is displayed.
Click the plus sign (+) next to each entity to expand the tree.

3 Select a department or agent.


4 Click Agent Dashboard in the right pane.
The following figure shows an example agent dashboard.

Central storage pool authority reports


In a large PureDisk environment, you can configure multiple storage pools. You
can configure one of these storage pools to be the central storage pool. You can
enable this capability at installation time or at a later date.
From the central storage pool authority, you can generate and view licensing and
capacity reports for all the storage pools in your environment. For more
information, see the user authentication information in the PureDisk Getting
Started Guide.
For more information about central reporting and the central reporting
dashboards, see the following:
■ See “Displaying the Central Reporting dashboard” on page 254.
■ See “Updating the Central Reporting dashboard” on page 259.
■ See “About central reporting” on page 300.

Displaying the Central Reporting dashboard


The following procedure explains how to retrieve a dashboard report that contains
licensing and capacity information.

To retrieve licensing and capacity information


1 Verify that you have the Central Report permission.
Only users with Central Report permissions have rights to view and update
the reports. For more information about permissions, see the PureDisk Client
Installation Guide.
2 Click Settings > Central SPA.
3 In the left pane, click Storage Pool Management.
4 In the right pane, click Central SPA Dashboard.
5 Click one of the following tabs in the dashboard display:
■ Enterprise License Report (default view).
See “Enterprise License Report tab” on page 255.
■ Storage Pools.
See “Storage Pools tab” on page 257.
■ Licenses / Features.
See “Licenses \ Features tab” on page 257.
■ Capacity Usage Report.
See “Capacity Usage Report tab” on page 258.

Enterprise License Report tab


The Enterprise License Report tab displays a quick overview of your license
status. Table 9-10 explains the columns on this tab.

Table 9-10 Enterprise License Report tab content

Column heading             Information or data

Storage Edition / Agents   Lists each individual feature license and lists the
                           PureDisk edition license that is installed on this
                           storage pool. Certain PureDisk features require
                           separate licenses.

                           The report lists each edition or license in its own,
                           separate row. There can be only one PureDisk edition.
                           There can be more than one feature license; for
                           example: Windows Application & Database Pack, Standard
                           Agent, and so on.

                           This dashboard report does not include information
                           about the PureDisk Deduplication Option (PDDO). If
                           PDDO is enabled in this storage pool, you can retrieve
                           reporting data from the NetBackup media server.

Licensed                   For a feature license row, this column lists the
                           number of clients that can use this feature.

                           For the PureDisk edition row, this column lists the
                           amount of front end data, on the client, that you can
                           protect with PureDisk backups.

Used                       For a feature license row, this column lists the
                           number of clients that currently use this feature.

                           For the PureDisk edition row, this column displays the
                           amount of front end, client storage that this storage
                           pool currently protects.

Alerts                     Displays alerts under the following conditions:
                           ■ When the used capacity is greater than the licensed
                             capacity.
                           ■ When the number of features used is greater than the
                             number of features licensed.

Storage Pools tab


The Storage Pools tab displays the connectivity status of all storage pools that
are registered to the central storage pool. Table 9-11 explains the columns on
this tab.

Table 9-11 Storage Pools tab content

Column heading        Information or data

SPA Name              The name of each registered storage pool.

SPA Version           The PureDisk release level that is installed on each
                      registered storage pool.

FQDN                  The address of each registered storage pool.

Connectivity status   An icon that represents the connectivity status between
                      each registered storage pool and the central storage pool.

Active License keys   The number of valid license keys that are installed on the
                      registered storage pool. You can install the same license
                      key on multiple storage pools, but this report lists each
                      key only once.

Licenses \ Features tab


For each registered storage pool, PureDisk displays all installed licenses in the
Licenses \ Features tab. PureDisk updates license keys with time restrictions
before it displays the tab. Unlike the Storage Pools tab, the Licenses \ Features
tab can display a license key more than once. Table 9-12 explains the columns on
this tab.

Table 9-12 Licenses \ Features tab content

Column heading   Information or data

License key      The license key content.

Feature          The feature that is enabled by that key row.

Expiry           The license key expiration date. If this field shows that a
                 particular license is due to expire, contact your Symantec
                 sales representative.

Capacity         The capacity that is enabled by that license key. If this field
                 shows that a particular license capacity is about to be
                 exceeded, contact your Symantec sales representative.

Locations        The computer upon which you installed the license key.

To view a licenses and features report for a particular license type or all types
◆ Use the Filter on feature pull-down menu to select a license type.
Your choices are as follows:
■ All
■ Premium Infrastructure
■ Windows Application & Database Pack
■ Standard Agent

Capacity Usage Report tab


This report shows total capacity statistics. The Last updated on column of this
report shows the date of the last update for each storage pool. Table 9-13
explains the columns on this tab.

Table 9-13 Capacity Usage Report tab content

Column heading                           Information or data

SPA Name                                 The name of each registered storage pool.

Last updated on                          The date and time when the report data
                                         was created.

Used capacity                            The amount of front end, client storage
                                         that this storage pool currently
                                         protects. This column does not describe
                                         the amount of storage occupied by backup
                                         data in the PureDisk storage pool.

Standard Agents                          The number of backup and restore or
                                         storage pool agents deployed in this
                                         storage pool.

Windows application and database pack    The number of application program agents
                                         deployed in this storage pool.

Updating the Central Reporting dashboard


PureDisk updates the license data for these reports daily. If you click the update
link in each report, PureDisk updates the data from all the storage pools that are
configured under the central storage pool. For example, you might want to update
license data after you delete license keys or add additional keys to increase your
licensed capacity.
The update can take considerable time. If PureDisk does not receive a response
from a storage pool within the configured time period, PureDisk marks the storage
pool as temporarily unavailable. If a storage pool is temporarily unavailable,
PureDisk includes the last available information in the report. All reports show
the date of the last update.
To view the latest information
◆ Click update from any report.
Chapter 10
Log files and auditing
This chapter includes the following topics:

■ About the log file directory

■ Audit trail reporting

■ Setting debugging mode

About the log file directory


PureDisk writes log files to the following directory on each PureDisk node:

/Storage/log

For each seven-day interval, PureDisk retains up to 1000 lines of logging messages
in the active log file in /Storage/log. Note that log files from PureDisk services
are often greater than 5 MB in length, but PureDisk does not retain job log files
that are greater than 5 MB in length.
PureDisk uses the standard Linux log rotation mechanism to rotate the audit log
every seven days. Log rotation ensures that the log files do not become too large.
PureDisk moves old logging information into separate files and compresses the
files to save space. PureDisk does not remove old log files. You can examine the
old log files in /Storage/log. The old files are named
/Storage/log/audit.log.1.bz2, /Storage/log/audit.log.2.bz2, and so on.
The last 1000 lines of every log file are always accessible in the /Storage/log
directory.
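
To examine one of the rotated, compressed audit logs without uncompressing it
permanently, you can use a standard tool such as bzcat. This is a generic Linux
example rather than a PureDisk-specific command:

# bzcat /Storage/log/audit.log.1.bz2 | tail -100
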
The following describe log files:
■ See “Content router log files” on page 262.
■ See “Metabase engine log file” on page 265.
■ See “Workflow engine log file” on page 268.

■ See “Server agent log files” on page 270.


■ See “About international characters in log files” on page 272.

Content router log files


All content router log files are located in the /Storage/log/spoold/ directory.
The following describe the content router log files:
■ See “The spoold.log file” on page 262.
■ See “The storaged.log file” on page 263.
■ See “Logging and debugging options” on page 264.

The spoold.log file


PureDisk records all incoming connections to the content router in the
/Storage/log/spoold/spoold.log file.

Example 1. The following is an incoming multistream backup (pdbackup.exe)


from a 32-bit Windows agent, version 6.6.0.6792 (192.168.163.1) for data selection
7:

January 17 16:10:11 INFO [1076910400]: Task Manager: started task 0 [thread 1079552320]
for 192.168.163.1:1636
January 17 16:10:11 INFO [1079552320]: Remote is using libcr Version 6.5.0.6792, Protocol
Version 6.1 running on
WIN32. Agent pdbackup.exe requesting access for DataSelection ID 7

Example 2. The following shows the metabase engine (192.168.163.132 = MBE IP)
requesting a POList (MBE-CLI application) from system data selection 1:

January 17 16:10:16 INFO [1076910400]: Task Manager: started task 0 [thread 1079552320]
for
192.168.163.132:51050
January 17 16:10:16 INFO [1079552320]: Remote is using libcr Version 6.5.0.6792, Protocol
Version 6.1 running on
Linux-x86_64. Agent MBE-CLI requesting access for DataSelection ID 1

If you want an overview of all incoming single-stream backups (PutFiles), you can
search the spoold.log file, as follows:

PureDisk:/Storage/log/spoold # grep -B 1 PutFiles spoold.log


January 17 14:10:12 INFO [1076910400]: Task Manager: started task 0 [thread 1079552320]
for 192.168.163.1:3738
January 17 14:10:12 INFO [1079552320]: Remote is using libcr Version 6.5.0.6792, Protocol
Version 6.1 running on
WIN32. Agent PutFiles requesting access for DataSelection ID 1
--
January 17 15:10:10 INFO [1076910400]: Task Manager: started task 0 [thread 1079552320]
for 192.168.13.41:4438
January 17 15:10:10 INFO [1079552320]: Remote is using libcr Version 6.5.0.6792, Protocol
Version 6.1 running on
WIN32. Agent PutFiles requesting access for DataSelection ID 7
--
January 17 16:10:11 INFO [1076910400]: Task Manager: started task 0 [thread 1079552320]
for 192.168.163.14:1638
January 17 16:10:11 INFO [1079552320]: Remote is using libcr Version 6.5.0.6792, Protocol
Version 6.1 running on
WIN32. Agent PutFiles requesting access for DataSelection ID 4
--
January 17 17:10:13 INFO [1076910400]: Task Manager: started task 0 [thread 1079552320]
for 192.168.163.1:2201
January 17 17:10:13 INFO [1079552320]: Remote is using libcr Version 6.5.0.6792, Protocol
Version 6.1 running on
WIN32. Agent PutFiles requesting access for DataSelection ID 9

In the preceding grep(1) command, the -B 1 parameter specifies to show the line
before the match, so the connecting client IP address is also displayed.
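
To count these single-stream backup connections instead of listing them, you can
use the -c option of grep, which reports the number of matching lines (one per
PutFiles connection in this log format):

PureDisk:/Storage/log/spoold # grep -c PutFiles spoold.log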

The storaged.log file


PureDisk records processing information related to the content router spooler
queue, the content router database, and the /Storage/data directory in the
/Storage/log/spoold/storaged.log log file.

For each transaction log, PureDisk logs the number of actions per type. For
example:

August 25 14:58:45 INFO [1077967168]:


Queue processing triggered by external request.
August 25 14:58:45 INFO [1077967168]:
Starting sort of tlog file range 521 - 525.
August 25 14:58:45 INFO [1077967168]:
Finished sort of tlog file range 521 - 525 in 0 seconds.
August 25 14:58:45 INFO [1077967168]:


Preparing to process transaction log /Storage/queue/sorted-521-525.tlog
August 25 14:58:45 INFO [1077967168]:
Synchronization for transaction log /Storage/queue/sorted-521-525.tlog started,
14306 transactions pending.
August 25 14:58:49 INFO [1077967168]:
Number of data store commits: 225
August 25 14:58:49 INFO [1077967168]:
Time required to build index on objects2 table: 0.211361
August 25 14:58:49 INFO [1077967168]:
Time required to drop objects table: 0.013812
August 25 14:58:49 INFO [1077967168]:
Time required to rename objects2 table to objects: 0.000773
August 25 14:58:49 INFO [1077967168]:
Transaction log 521-525 Completed. Expect: 14306 (0.82MB) Commit: 14306 (0.00MB) Retry: 0
Log: /Storage/queue/sorted-521-525.tlog SO: Add 0, Ref Add 14298, Ref Add Fail: 0,
Ref Del 0 DO: Add 0, Ref Add 0, Ref Add Fail: 0, Ref Del 4 TASK: Add 2, End 2, End All 0,
Del 0 DCID: SO 0, SO Fail 0, DO 0, DO Fail 0 MARKER: 0, Fail 0
August 25 14:58:49 INFO [1077967168]: Update last committed tlogid from 520 to 525
August 25 14:58:49 INFO [1077967168]: Start processing delayed operations of
'/Storage/queue/sorted-521-525.delayed'.
August 25 14:58:49 INFO [1077967168]: Completed processing of delayed operations of
'/Storage/queue/sorted-521-525.delayed'.

Logging and debugging options


If you want to increase logging in all content router log files, modify the content
router configuration file. For information about how to change configuration files,
see the following:
See “About the configuration files” on page 321.

To increase logging in all content router log files


1 In the PureDisk administrator Web UI, click Settings > Configuration >
Configuration File Templates > PureDisk Content Router > Default ValueSet
for PureDisk ContentRouter > Logging > Logging.
2 Change the All OS: value to full,thread.
3 Type the following command to restart the content router:

# /etc/init.d/puredisk restart pdcr

If you want to specify that the log files include more information, include the
--trace parameter when you restart the content router. For example:

# /etc/init.d/puredisk stop pdcr


# /opt/pdcr/bin/spoold --trace /Storage/log/spoold/trace.log

If you specified the --trace parameter, later you can specify the following
to disable tracing:

# /etc/init.d/puredisk restart pdcr

Metabase engine log file


The metabase engine log file is located in /Storage/log/mbe.log.
Most of the information logged in the mbe.log file is related to the activity of
importing information into the metabase engine database. Each metabase import
is defined by a task ID, which consists of the data selection ID and the job step
start time. For example, [Task [4-1200200137663]] is an import for data selection
4, started on Sat Jan 12 22:55:37 2008, which is a converted UNIX time stamp.
The following example mbe.log file lists imports:

Sat Jan 12 22:51:33 CST 2008 <INFO> [Task [2-1200199892625]]


(DISPATCHER) Dispatcher has set a task to approved.
Sat Jan 12 22:51:34 CST 2008 <INFO> [Task [2-1200199892625]]
(DOWNLOAD 0) Download: A job has arrived!
Sat Jan 12 22:51:40 CST 2008 <INFO> [Task [2-1200199892625]]
(DOWNLOAD 0) Download: finished!
Sat Jan 12 22:51:41 CST 2008 <INFO> [Task [2-1200199892625]]
(DISPATCHER) Dispatcher has set a task to approved.
Sat Jan 12 22:51:42 CST 2008 <INFO> [Task [2-1200199892625]]
(SORT0) Starting sort for /Storage/tmp/pre57295.tmp


Sat Jan 12 22:51:43 CST 2008 <INFO> [Task [2-1200199892625]]
(DISPATCHER) Dispatcher has set a task to approved.
Sat Jan 12 22:51:44 CST 2008 <INFO> [Task [2-1200199892625]]
(SPLIT0) Starting to split
Sat Jan 12 22:51:47 CST 2008 <INFO> [Task [2-1200199892625]]
(SPLIT0) Splitting file
Sat Jan 12 22:51:48 CST 2008 <INFO> [Task [2-1200199892625]]
(SPLIT0) Generating dirfile
Sat Jan 12 22:51:49 CST 2008 <INFO> [Task [2-1200199892625]]
(SPLIT0) Done converting
Sat Jan 12 22:51:49 CST 2008 <INFO> [Task [2-1200199892625]]
(DISPATCHER) Dispatcher has set a task to approved.
Sat Jan 12 22:51:50 CST 2008 <INFO> [Task [2-1200199892625]]
(IMPORT0) ImportThread has work to do.
Sat Jan 12 22:51:50 CST 2008 <INFO> [Task [2-1200199892625]]
(IMPORT0) ImportThread going to import /Storage/tmp/bulkInsert57297
Sat Jan 12 22:51:50 CST 2008 <INFO> [Task [2-1200199892625]]
(IMPORT0) ImportThread going to import a raw POlist
Sat Jan 12 22:51:51 CST 2008 <INFO> [Task [2-1200199892625]]
(IMPORT0) ImportThread going to import a raw POlist
Sat Jan 12 22:51:51 CST 2008 <INFO> [Task [2-1200199892625]]
(IMPORT0) Import done
Sat Jan 12 22:51:51 CST 2008 <INFO> [Task [2-1200199892625]]
(DISPATCHER) Dispatcher has set a task to approved.
Sat Jan 12 22:51:52 CST 2008 <INFO> [Task [2-1200199892625]]
(EVAL0) Evaluating DataSelection 2.
Sat Jan 12 22:51:53 CST 2008 <INFO> [Task [2-1200199892625]]
(EVAL0) No duplicate or minor PO's were detected.
Sat Jan 12 22:51:53 CST 2008 <INFO> [Task [2-1200199892625]]
(EVAL0) DataSelection 2 succesfully evaluated.
Sat Jan 12 22:51:55 CST 2008 <INFO> (DEPARTSERVLET) Task has
completed:Task [2-1200199892625]
Sat Jan 12 22:55:38 CST 2008 <INFO> [Task [4-1200200137663]]
(DISPATCHER) Dispatcher has set a task to approved.
Sat Jan 12 22:55:38 CST 2008 <INFO> [Task [4-1200200137663]]
(DOWNLOAD 0) Download: A job has arrived!
Sat Jan 12 22:55:39 CST 2008 <INFO> [Task [4-1200200137663]]
(DOWNLOAD 0) Download: finished!
Sat Jan 12 22:55:40 CST 2008 <INFO> [Task [4-1200200137663]]
(DISPATCHER) Dispatcher has set a task to approved.
Sat Jan 12 22:55:40 CST 2008 <INFO> [Task [4-1200200137663]]
(SORT1) Starting sort for /Storage/tmp/pre57303.tmp
Sat Jan 12 22:55:42 CST 2008 <INFO> [Task [4-1200200137663]]


(DISPATCHER) Dispatcher has set a task to approved.
Sat Jan 12 22:55:42 CST 2008 <INFO> [Task [4-1200200137663]]
(SPLIT0) Starting to split
Sat Jan 12 22:55:42 CST 2008 <INFO> [Task [4-1200200137663]]
(SPLIT0) Splitting file
Sat Jan 12 22:55:42 CST 2008 <INFO> [Task [4-1200200137663]]
(SPLIT0) Generating dirfile
Sat Jan 12 22:55:42 CST 2008 <INFO> [Task [4-1200200137663]]
(SPLIT0) Done converting
Sat Jan 12 22:55:43 CST 2008 <INFO> [Task [4-1200200137663]]
(DISPATCHER) Dispatcher has set a task to approved.
Sat Jan 12 22:55:43 CST 2008 <INFO> [Task [4-1200200137663]]
(IMPORT0) ImportThread has work to do.
Sat Jan 12 22:55:43 CST 2008 <INFO> [Task [4-1200200137663]]
(IMPORT0) ImportThread going to import /Storage/tmp/bulkInsert57305
Sat Jan 12 22:55:43 CST 2008 <INFO> [Task [4-1200200137663]]
(IMPORT0) ImportThread going to import a raw POlist
Sat Jan 12 22:55:43 CST 2008 <INFO> [Task [4-1200200137663]]
(IMPORT0) ImportThread going to import a raw POlist
Sat Jan 12 22:55:43 CST 2008 <INFO> [Task [4-1200200137663]]
(IMPORT0) Import done
Sat Jan 12 22:55:44 CST 2008 <INFO> [Task [4-1200200137663]]
(DISPATCHER) Dispatcher has set a task to approved.
Sat Jan 12 22:55:44 CST 2008 <INFO> [Task [4-1200200137663]]
(EVAL0) Evaluating DataSelection 4.
Sat Jan 12 22:55:44 CST 2008 <INFO> [Task [4-1200200137663]]
(EVAL0) No duplicate or minor PO's were detected.
Sat Jan 12 22:55:44 CST 2008 <INFO> [Task [4-1200200137663]]
(EVAL0) DataSelection 4 succesfully evaluated.

If your log file is large, you can search for the information you want. For example,
type the following command to display all imports for data selection 7:
PureDisk:/Storage/log # grep 'Task \[7-' mbe.log

The metabase engine disk evaluator logs disk usage every 5 minutes. For example:

Thu Jan 17 17:28:57 CST 2008 <INFO> (DISKEVALUATOR0)


Evaluating left disk space
Thu Jan 17 17:28:58 CST 2008 <INFO> (DISKEVALUATOR0)
Diskspace used on partition with the metabase database is: 28.0%.
Thu Jan 17 17:29:00 CST 2008 <INFO> (DISKEVALUATOR0)
Diskspace used on partition with metabase tmp folder is: 28.0%.

Workflow engine log file


PureDisk writes all workflow engine job, job step, and watchdog actions to the
file /Storage/log/pdwfe.log. The following describe the workflow engine log file:

■ See “The pdwfe.log file” on page 268.


■ See “Logging and debugging options” on page 270.

The pdwfe.log file


The following is an example log file:

Thu Jan 17 2008 14:07:16.948539 INFO (1075325248):


Agent 'PureDisk' (id: 379000000): no jobstep found
Thu Jan 17 2008 14:08:17.142203 INFO (1074268480):
Agent 'PureDisk' (id: 379000000): no jobstep found
Thu Jan 17 2008 14:09:19.524282 INFO (1074796864):
Agent 'PureDisk' (id: 379000000): no jobstep found
Thu Jan 17 2008 14:10:02.150714 INFO (1075853632):
Run Policy 'scheduled' (id :106)
Thu Jan 17 2008 14:10:02.178834 INFO (1075853632):
Job 24: Created 'Files and Folders Backup' for Agent 'ros2pc00' (id: 2)
Thu Jan 17 2008 14:10:02.782948 INFO (1075325248):
Job 24: Return Jobstep 'PrepareBackup.php' (id: 155) to Agent 'ros2pc00' (id: 2)
Thu Jan 17 2008 14:10:07.152421 INFO (1074268480):
Job 24: Update status of jobstep 155 from RUNNING to SUCCESS
Thu Jan 17 2008 14:10:07.703998 INFO (1074796864):
Job 24: Return Jobstep 'ScanFilesystem.php' (id: 156) to Agent 'ros2pc00' (id: 2)
Thu Jan 17 2008 14:10:08.499284 INFO (1075853632):
Agent 'ros2pc00' (id: 2): no jobstep found
Thu Jan 17 2008 14:10:09.478683 INFO (1075325248):
Job 24: Update status of jobstep 156 from RUNNING to SUCCESS
Thu Jan 17 2008 14:10:09.980596 INFO (1074268480):
Job 24: Return Jobstep 'PutFiles.php' (id: 157) to Agent 'ros2pc00' (id: 2)
Thu Jan 17 2008 14:10:10.784255 INFO (1074796864):
Agent 'ros2pc00' (id: 2): no jobstep found

The pdwfe.log file contains information about the following common workflow
engine actions:
■ About the watchdog:

Thu Jan 17 2008 14:00:47.645449 INFO (1093708096): Running watch dog.


Thu Jan 17 2008 14:00:47.649075 INFO (1093708096): Watchdog Run successful.

■ About agents when they request the next job step (nextJobStep web service):

Thu Jan 17 2008 14:05:16.667128 INFO (1074796864):


Agent 'PureDisk' (id: 379000000): no jobstep found

■ About job steps distributed over agents:

Thu Jan 17 2008 14:10:02.782948 INFO (1075325248):


Job 24: Return Jobstep 'PrepareBackup.php' (id: 155) to Agent 'ros2pc00' (id: 2)
Thu Jan 17 2008 14:10:07.152421 INFO (1074268480):
Job 24: Update status of jobstep 155 from RUNNING to SUCCESS

You can retrieve log information related to a single job. For example, to obtain
workflow engine log information related to job ID 24, type the following command:

PureDisk:/Storage/log # grep 'Job 24' pdwfe.log


Thu Jan 17 2008 14:10:02.178834 INFO (1075853632): Job 24:
Created 'Files and Folders Backup' for Agent 'ros2pc00' (id: 2)
Thu Jan 17 2008 14:10:02.782948 INFO (1075325248): Job 24:
Return Jobstep 'PrepareBackup.php' (id: 155) to Agent 'ros2pc00' (id: 2)
Thu Jan 17 2008 14:10:07.152421 INFO (1074268480): Job 24:
Update status of jobstep 155 from RUNNING to SUCCESS
Thu Jan 17 2008 14:10:07.703998 INFO (1074796864): Job 24:
Return Jobstep 'ScanFilesystem.php' (id: 156) to Agent 'ros2pc00' (id: 2)
Thu Jan 17 2008 14:10:09.478683 INFO (1075325248): Job 24:
Update status of jobstep 156 from RUNNING to SUCCESS
Thu Jan 17 2008 14:10:09.980596 INFO (1074268480): Job 24:
Return Jobstep 'PutFiles.php' (id: 157) to Agent 'ros2pc00' (id: 2)
Thu Jan 17 2008 14:10:11.406297 INFO (1075853632): Job 24:
Update Variables
Thu Jan 17 2008 14:10:11.411698 INFO (1075853632): Job 24:
Update Variables
Thu Jan 17 2008 14:10:11.739674 INFO (1074268480): Job 24:
Update Variables
Thu Jan 17 2008 14:10:12.770142 INFO (1074796864): Job 24:
Update Variables
Thu Jan 17 2008 14:10:12.974254 INFO (1075325248): Job 24:
Update Variables
Thu Jan 17 2008 14:10:12.998051 INFO (1075325248): Job 24:
Update Variables
Thu Jan 17 2008 14:10:14.314949 INFO (1074268480): Job 24:
Update status of jobstep 157 from RUNNING to SUCCESS_WITH_ERRORS
Thu Jan 17 2008 14:10:14.674194 INFO (1074796864): Job 24:


Return Jobstep 'MBImportAction.php' (id: 158) to Agent 'PureDisk' (id: 379000000)
Thu Jan 17 2008 14:10:24.548564 INFO (1074796864): Job 24:
Update Variables
Thu Jan 17 2008 14:10:24.732565 INFO (1075325248): Job 24:
Update Variables
Thu Jan 17 2008 14:10:25.044541 INFO (1075853632): Job 24:
Update status of jobstep 158 from RUNNING to SUCCESS
Thu Jan 17 2008 14:10:25.428563 INFO (1074268480): Job 24:
Return Jobstep 'ProcessJobStatistics.php' (id: 159) to Agent 'PureDisk' (id: 379000000)
Thu Jan 17 2008 14:10:26.186136 INFO (1075325248): Job 24:
Update Variables
Thu Jan 17 2008 14:10:26.602925 INFO (1075853632): Job 24:
Update status of jobstep 159 from RUNNING to SUCCESS
Thu Jan 17 2008 14:10:27.153563 INFO (1074796864): Job 24:
Return Jobstep 'FinishBackup.php' (id: 160) to Agent 'ros2pc00' (id: 2)
Thu Jan 17 2008 14:10:29.298115 INFO (1075853632): Job 24:
Update status of jobstep 160 from RUNNING to SUCCESS
Thu Jan 17 2008 14:10:29.504469 INFO (1085315392): Job 24:
Process Workflow Engine Job Step 'markexit' (id :161)
Thu Jan 17 2008 14:10:29.514389 INFO (1085315392): Job 24:
Workflow Engine: Step 161 returns SUCCESS

Logging and debugging options


You can increase the amount of information that PureDisk writes to
/Storage/log/pdwfe.log. To do so, log into the storage pool authority and type
the following commands:

# /etc/init.d/puredisk stop pdworkflowd


# /opt/pdwfe/bin/pdwfe --trace
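
To return to normal logging afterward, restart the workflow engine service. The
following command mirrors the content router procedure earlier in this chapter
and reuses the service name from the stop command above; it is a reasonable
assumption rather than a separately documented step, so confirm the exact
procedure for your installation:

# /etc/init.d/puredisk restart pdworkflowd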

Server agent log files


PureDisk logs all server agent actions to /Storage/log/Agent.log. PureDisk
writes all job step logs to /Storage/tmp/workflow.XXXX, where XXXX is the job
step ID. The following describe the server agent log files:
■ See “The Agent.log file” on page 271.
■ See “The job step log” on page 271.
■ See “Logging and debugging options” on page 272.

The Agent.log file


The following is an example of an Agent.log file:

Thu Jan 17 2008 16:10:23.899687 INFO (1074796864):


Incoming request: kick
Thu Jan 17 2008 16:10:24.512531 INFO (1080609088):
Jobstep: ProcessJobStatistics.php
Thu Jan 17 2008 16:10:24.512951 INFO (1080609088):
Logfile path: '/Storage/tmp/workflow.172'
Thu Jan 17 2008 16:10:25.053758 INFO (1080609088):
Updating status for job step #172
Thu Jan 17 2008 16:10:25.054380 INFO (1080609088):
Upload logfile /Storage/tmp/workflow.172 (1382 bytes) using SPA webservice.
Thu Jan 17 2008 16:10:25.575934 INFO (1080609088):
Jobstep 172 successfully set to status 2

Thu Jan 17 2008 17:10:15.702997 INFO (1074268480):


Incoming request: kick
Thu Jan 17 2008 17:10:16.433983 INFO (1077438784):
Jobstep: MBImportAction.php
Thu Jan 17 2008 17:10:16.434432 INFO (1077438784):
Logfile path: '/Storage/tmp/workflow.178'
Thu Jan 17 2008 17:10:25.277339 INFO (1077438784):
Updating status for job step #178
Thu Jan 17 2008 17:10:25.278032 INFO (1077438784):
Upload logfile /Storage/tmp/workflow.178 (1445 bytes) using SPA webservice.
Thu Jan 17 2008 17:10:25.806503 INFO (1077438784):
Jobstep 178 successfully set to status 2

In this example, there are two job step processes: ProcessJobStatistics and
MBImportAction. All log lines that relate to these job steps have the same thread
ID: 1080609088 for ProcessJobStatistics and 1077438784 for MBImportAction.

The job step log


Each job step that runs creates a job step log on the local agent. This log file is
loaded in the administrator Web UI for the job details. The following procedure
explains how to find information about a job step.

To find a job step ID for a running job


1 Click Monitor > Jobs.
2 In the right pane, click the number in the Job Id column that corresponds to
the job that contains the job step that you want to examine.
3 In the pop-up that appears, click the Details tab.
4 On the Details tab, click on the row that describes the job step you want to
examine.
5 On the left pane of the Details tab, note the jobid information.
If necessary, use the pull-down menu to select Normal (the default), Verbose,
Very Verbose, or Show All to display differing amounts of information.

Logging and debugging options


By default, the PureDisk agent removes all job scripts and job logs when a job step
finishes. If you want to retain these files on a particular client system, edit the
agent.cfg file on that particular client.

The location of this file differs depending on your platform. For example, on a
Windows client, agent.cfg is located in install_dir\Program
Files\Symantec\NetBackup PureDisk Agent\etc\agent.cfg. When you edit
this file, go into the debug section, and set the debug parameter to 1.
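
For illustration only, the relevant lines in agent.cfg might look like the
following after the change. The section header shown here is an assumption based
on the description above, so do not add new sections; edit only the debug value
that already exists in your copy of the file:

[debug]
debug=1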

About international characters in log files


The PureDisk log files contain up to 1000 international characters under the
following conditions:
■ If you use international characters to specify names, descriptions, and other
labels in the storage pool
■ If the PureDisk agent is installed on a localized client
PureDisk displays these characters correctly when you view log files, such as job
logs, through the Web UI. You can view the server logs stored in /Storage/log
on a PureDisk node. However, you might need to make some configuration changes
depending on where and how you want to view these log files.
These configuration changes are as follows:
■ If you log on to the PureDisk node with a secure shell connection (SSH) on
Linux or UNIX, make sure you use a UTF-8 locale. For example, use
en_US.UTF-8. (A sample command appears after this list.)

■ You can log on to the PureDisk node with a Windows terminal client such as
Putty. Ensure that the terminal client uses the UTF-8 character set and a font
that contains the international characters that you need to display.
■ If you log on to the PureDisk node directly through the console, PureDisk does
not display international characters properly. Use one of the previous methods
to view log files with international characters.
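
For example, before you view the logs over an SSH session, you can switch the
session to a UTF-8 locale with generic shell commands such as the following. The
locale names that are available depend on the node's configuration:

# export LANG=en_US.UTF-8
# less /Storage/log/pdwfe.log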

Audit trail reporting


The audit trail report shows a list of users and storage pool activities. You must
be logged in as root to retrieve audit log information.

Figure 10-1 Example audit trail report



To generate an audit trail report


1 Click Manage > Agent.
2 In the left pane, select World.
3 In the right pane, click Show Audit Trail or Download Audit Trail.
Alternatively, you can click Show Audit Trail and then click Download Audit
Trail if you decide later that you want to download the information.
The following information pertains to the output formats available to you
from the right pane:
■ If you click Download Audit Trail, follow the instructions in the dialog
boxes that appear. PureDisk downloads the report to a compressed file
that ends in .tgz.
■ If you click Show Audit Trail, the audit trail appears in the right pane.
The following three icons appear above the Object Name column:
■ The printer icon. If you click the printer icon, follow the instructions
in the dialog boxes that appear to select a printer and send the report
to that printer.
■ The spreadsheet icon. If you click the spreadsheet icon, a dialog box
appears. Click OK. Follow the instructions in the next dialog box to
write or save these files.
■ The refresh button.

Setting debugging mode


When you enable debugging mode through the PureDisk Web UI, a PureDisk agent
provides detailed log information on that client agent or server agent. Also, when
enabled, temporary scripts and log files remain in place on a client agent or server
agent. Typically, PureDisk removes these files after they are no longer needed,
but when you enable debugging mode, PureDisk leaves them in place. For example,
you can enable debugging mode to troubleshoot failing jobs for a particular client
agent.
Typically, Symantec CFT or technical support requests that you enable this
capability in the storage pool while troubleshooting. Do not enable debugging
mode for general use.
The following procedures explain how to use debugging mode:
■ See “Enabling debugging mode” on page 275.
■ See “Disabling debugging mode” on page 276.

■ See “Removing temporary debugging files” on page 276.

Enabling debugging mode


The following procedures explain how to enable debugging mode on a client agent
or on a server agent.
To enable debugging mode on a client agent
1 Click Manage > Agent.
2 In the left pane, select the agent for which you want to enable debug mode.
3 In the right pane, pull down More Tasks, and select Set Debug Mode.
4 Run the job that you want to troubleshoot.
5 Analyze the log files.
The location of the temporary debugging files depends on your platform, as
follows:
■ On Windows platforms, PureDisk writes debugging files to
install_dir\Symantec\NetBackup PureDisk Agent\tmp. For example,
C:\Program Files\Symantec\NetBackup PureDisk Agent\tmp.
■ On Linux, UNIX, or MacOS, PureDisk writes debugging files to
/opt/pdagent/tmp.

6 Disable debugging mode.


For information about how to disable debugging mode, see the following:
See “Disabling debugging mode” on page 276.

To enable debugging mode on a server agent


1 Click Settings > Topology.
2 In the left pane, select the server agent for which you want to enable debug
mode.
The node identifiers appear under the storage pool name. For example, if
there are multiple server agents, multiple node identifiers appear under the
storage pool name in the left pane. The node identifier can be an FQDN, host
name, or IP address.
3 In the right pane, select Set Debug Mode.
4 Run the job that you want to troubleshoot.

5 Analyze the log files in /Storage/tmp.


6 Disable debugging mode.
For information about how to disable debugging mode, see the following:
See “Disabling debugging mode” on page 276.

Disabling debugging mode


Perform the following procedure to disable debugging mode. If you restart an
agent, that action also disables debugging mode.
To disable debugging mode on a client agent
1 Click Manage > Agent.
2 In the left pane, select the agent for which you want to disable debug mode.
3 In the right pane, pull down More Tasks, and select Reset Debug Mode.
4 Remove the temporary debugging files.
For information about how to remove the temporary files, see the following:
See “Removing temporary debugging files” on page 276.
To disable debugging mode on a server agent
1 Click Settings > Topology.
2 In the left pane, select the server agent for which you want to disable
debugging mode.
The node identifiers appear under the storage pool name. For example, if
there are multiple server agents, multiple node identifiers appear under the
storage pool name in the left pane. The node identifier can be an FQDN, host
name, or IP address.
3 In the right pane, select Reset Debug Mode.
4 Remove the temporary debugging files.
For information about how to remove the temporary files, see the following:
See “Removing temporary debugging files” on page 276.

Removing temporary debugging files


Both of the following procedures remove temporary debugging files from an agent.
The second method takes more time and resources because it runs the System
policy for Maintenance on the storage pool.

To remove temporary files - method 1


1 Change to the directory that contains the temporary files.
The location of the temporary debugging files depends on your platform, as
follows:
■ On Windows platforms, PureDisk writes debugging files to
install_dir\Symantec\NetBackup PureDisk Agent\tmp. For example,
C:\Program Files\Symantec\NetBackup PureDisk Agent\tmp.
■ On Linux, UNIX, or MacOS, PureDisk writes debugging files to
/opt/pdagent/tmp.

2 Use operating system commands to remove the temporary debugging files.
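
For example, on a Linux, UNIX, or MacOS client, a command such as the following
removes the contents of the temporary directory. This is a generic shell sketch,
so confirm the path and its contents before you delete anything. On Windows
clients, delete the files in the tmp directory with Windows Explorer or the del
command.

# rm -f /opt/pdagent/tmp/*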


To remove temporary files - method 2
1 Click Manage > Policies.
2 In the left pane, under Storage Pool Management Policies, expand
Maintenance.
3 Select System policy for Maintenance.
4 In the right pane, click Run Policy.
This action runs the System policy for Maintenance on the entire storage
pool, not just on the agent you debugged. The policy removes the scripts and
the temporary files that PureDisk writes to the agent during debugging.
Chapter 11
Storage pool management
This chapter includes the following topics:

■ About storage pool management

■ About adding services

■ Adding a service to a node

■ Activating a new service in the storage pool

■ Rerouting a content router and managing content routers

■ Deactivating a service

■ Managing license keys

■ About central reporting

■ Rerouting a metabase engine

■ About clustered storage pool administration

■ Changing the PDOS administrator’s password

■ Changing the PureDisk internal database and the LDAP administrator


passwords

■ Increasing the number of client connections

■ Adjusting the clock on a PureDisk node

■ Adjusting the Web UI time-out interval

■ Stopping and starting processes on one PureDisk node (unclustered)

■ Stopping and starting processes on one PureDisk node (clustered)

■ Stopping and starting processes in a multinode PureDisk storage pool



■ Restarting the Java run-time environment

About storage pool management


The following describe how to perform tasks to manage storage pool components
and services:
■ See “About adding services” on page 280.
■ See “Adding a service to a node” on page 282.
■ See “Activating a new service in the storage pool” on page 287.
■ See “Rerouting a content router and managing content routers” on page 288.
■ See “Deactivating a service” on page 296.
■ See “Managing license keys” on page 299.
■ See “About central reporting” on page 300.
■ See “Rerouting a metabase engine” on page 304.
■ See “About clustered storage pool administration” on page 310.
■ See “Changing the PDOS administrator’s password” on page 310.
■ See “Changing the PureDisk internal database and the LDAP administrator
passwords” on page 311.
■ See “Increasing the number of client connections” on page 311.
■ See “Adjusting the clock on a PureDisk node” on page 312.
■ See “Adjusting the Web UI time-out interval” on page 314.
■ See “Stopping and starting processes on one PureDisk node (unclustered)”
on page 314.
■ See “Stopping and starting processes on one PureDisk node (clustered)”
on page 317.
■ See “Stopping and starting processes in a multinode PureDisk storage pool”
on page 318.
■ See “Restarting the Java run-time environment” on page 319.

About adding services


You might need to reconfigure your storage pool if your data protection needs
change or if your PureDisk system reaches its capacity.

For example, you might need to add nodes or services. To determine when to add
additional services, perform the following tasks on a regular basis:
■ Examine events from the system monitor script.
The system monitor script monitors system activity. By default, it runs every
five minutes and sends a status message. To see the messages, click Monitor
> Alerts & Notification. In the right pane, pull down Application, and type
MonitorStatistics in the Look for: field.
For more information, see the following:
The PureDisk Backup Operator's Guide.
■ Display capacity dashboards.
For more information, see the following:
See “About Dashboard reports” on page 249.
For example, you might need to add the following additional services:
■ Metabase engine.
One metabase engine service can support 1,000 clients. Add an additional
metabase engine service if your site needs to support more than 1,000 clients.
As you add new clients, PureDisk assigns them to the new metabase engine.
It does not move clients from one metabase engine to another metabase engine.
■ Content router.
Add an additional content router service if the /Storage/data partition fills.
The system monitor script’s report and the capacity dashboard include
information on the disks that have reached their capacity. For more information
about adding content routers and content router rerouting, see the following:
See “Rerouting a content router and managing content routers” on page 288.
■ A NetBackup export engine.
This service lets you send content router data to a NetBackup storage unit.
The following topics explain how to perform reconfiguration tasks:
■ See “Adding a service to a node” on page 282.
■ See “Activating a new service in the storage pool” on page 287.
■ See “Rerouting a content router and managing content routers” on page 288.
■ See “Deactivating a service” on page 296.
■ See “Managing license keys” on page 299.
■ See “About central reporting” on page 300.
■ See “Rerouting a metabase engine” on page 304.
■ See “About clustered storage pool administration” on page 310.

■ See “Changing the PDOS administrator’s password” on page 310.


■ See “Changing the PureDisk internal database and the LDAP administrator
passwords” on page 311.
■ See “Increasing the number of client connections” on page 311.
■ See “Adjusting the clock on a PureDisk node” on page 312.
■ See “Adjusting the Web UI time-out interval” on page 314.

Adding a service to a node


You can add a new service on a new, standalone node or on a node that presently
hosts other services. The following sections explain how to add a new content router,
metabase engine, or NetBackup export engine:
■ See “Adding a new service on an existing node” on page 282.
■ See “Adding a new node and at least one new service on the new node”
on page 284.
■ See “Verifying and specifying content router capacity” on page 285.
■ See “Adding a new passive node to a cluster” on page 286.
For information about how to add services to an unclustered storage pool, see the
sections that follow; conditional steps identify the actions that apply only to
clustered storage pools.

Adding a new service on an existing node


The following procedure explains how to add a new service to an existing service
group on an existing node in a storage pool.
To add a new service on an existing node
1 (Conditional) Freeze the service group on the node to which you want to add
the new service.
Perform this step if the storage pool is clustered.
This action prevents VCS from failing over the node when it restarts the
server process. To freeze the node, use the Cluster Manager Java console.
2 In a browser window, type the following to start the storage pool configuration
wizard:

http://URL/Installer

For URL, type the FQDN of the node that hosts the storage pool authority
service.

3 Click Next on the wizard's pages until you arrive at the Services Configuration
page.
4 On the Services Configuration page, perform the following steps:
■ Click Change.
■ Select the service you want to add.
■ Click Next when the Services Configuration page is complete.

5 Click Next until you arrive at the Implementation page.


6 On the Implementation page, click Finish.
7 (Conditional) Visually inspect the Cluster Manager Java Console Web UI and
check for fault conditions.
Perform this step if the storage pool is clustered.
VCS might have detected that the service is down. In this case, the resource
might appear as faulted in the PureDisk Web UI.
If the resource appears as faulted, complete the following steps:
■ Right-click the resource and select Clear Fault - Auto.
■ After you clear the fault, the resource appears as Offline. Although the
service is already started, you must tell VCS to monitor the resource again.
To enable monitoring, right-click the resource and select Probe -
node-name.

8 (Conditional) Visually inspect the Cluster Manager Java Console and make
sure that all resources now appear with a status of Online.
Perform this step if the storage pool is clustered.
9 (Conditional) In the Cluster Manager Java Console, right-click the service
group and select Unfreeze.
Perform this step if the storage pool is clustered.
10 (Conditional) Verify that the service was added successfully.
Perform this step if you added a content router or a metabase engine.
Proceed as follows:
■ If you added a content router, see the following:
See “Verifying and specifying content router capacity” on page 285.
■ If you added a metabase engine or a NetBackup export engine, see the
following:
See “Activating a new service in the storage pool” on page 287.
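For reference, if the storage pool authority service runs on a node with the hypothetical FQDN spa01.example.com, the wizard URL in step 2 of this procedure would be the following:

http://spa01.example.com/Installer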

Adding a new node and at least one new service on the new node
The following procedure explains how to add a new node and a service.
When you add new nodes to a clustered storage pool, make sure to add them one
at a time.
To add a service to a new node
1 Install PDOS on the computer that you want to configure as a new node.
Use the instructions in the PureDisk Storage Pool Installation Guide to install
PDOS.
2 In a browser window, type the following to start the storage pool configuration
wizard:

http://URL/Installer

For URL, type the FQDN of the node that hosts the storage pool authority
service.
3 Click Next until you arrive at the Storage Pool Node Summary page.
4 Visually inspect the Storage Pool Node Summary page and determine if the
new node appears.
If the new node does not appear, click Add Node and add the node. Use the
instructions in the PureDisk Storage Pool Installation Guide to add the node.
5 Click Next until you arrive at the Storage Selection pages.
Use the instructions in the PureDisk Storage Pool Installation Guide to
configure storage for this node. If no disks appear in the wizard, it might be
because your disks need to be formatted or repartitioned.
6 Click Next until you arrive at the Services Configuration page.
Use the instructions in the PureDisk Storage Pool Installation Guide to
configure services on this node.
7 Click Next until you arrive at the Implementation page.
8 On the Implementation page, click Finish.
9 (Conditional) If TCP/IP settings on the other nodes have been changed to
improve replication job performance, run the following script on the new
node:

# /opt/pdconfigure/scripts/support/tcp_tune.sh modify

10 (Conditional) Verify that the service was added successfully.


Perform this step if you added a content router or a metabase engine.

Proceed as follows:
■ If you added a content router, see the following:
See “Verifying and specifying content router capacity” on page 285.
■ If you added a metabase engine or a NetBackup export engine, see the
following:
See “Activating a new service in the storage pool” on page 287.

Verifying and specifying content router capacity


Perform the procedure in this section under the following circumstances:
■ You added a content router to an existing storage pool.
■ You configured a new storage pool and your content routers have different
capacities.
You do not need to perform this procedure if you added a metabase engine or
NetBackup export engine.
PureDisk assumes that all content routers have the same storage capacity.
However, you may have content routers each with different capacities. If you do
not specify the content router capacities explicitly, the content router with the
smallest capacity fills up first. As a result, you must add another content router
sooner.
To verify and specify content router capacity
1 Click Settings > Topology.
2 In the left pane, expand the tree until you see all the content routers.
3 Select a content router.
4 In the right pane, visually inspect the Storage size (GB) field.
5 (Conditional) Specify the content router’s capacity in the Storage size (GB)
field.
Perform this step if the displayed capacity is incorrect.
PureDisk uses the information in this field when it determines the fingerprint
range to assign to each content router. After you change this value, PureDisk
redistributes the fingerprint ranges relative to the new capacity specifications.
If the content routers already contain data, PureDisk redistributes the data,
too.
6 Click Save to save your changes.

7 Perform step 3 through step 6 for each content router in the storage pool.
8 Your next action depends on which type of service you changed or added, as
follows:
If you edited the information for an active content router, perform the
procedure in the following section:
See “Rerouting a content router and managing content routers” on page 288.
If you edited the information for the content routers that you installed as
part of a new storage pool, perform the procedure in the following section:
See “Rerouting a content router and managing content routers” on page 288.

Adding a new passive node to a cluster


Perform this procedure if you want to add an additional passive node to a cluster.
The following procedure assumes that, as for all passive nodes, you do not intend
to install any active services on this node. In this case, you want only to extend
the nodes available to the cluster for failover.
When you add new passive nodes to a clustered storage pool, make sure to add
them one at a time.
To add a new passive node to a cluster
1 Install PDOS on the computer that you want to configure as a new node.
Use the instructions in the PureDisk Storage Pool Installation Guide to install
PDOS.
2 In a browser window, type the following to start the storage pool configuration
wizard:

http://URL/Installer

For URL, type the FQDN of the node that hosts the storage pool authority
service.
3 Click Next until you arrive at the Storage Pool Node Summary page.
4 Visually inspect the Storage Pool Node Summary page and determine if the
new node appears.
If the new node does not appear, click Add Node and add the node. Use the
instructions in the PureDisk Storage Pool Installation Guide to add the node.
When the new node appears in the node summary, perform the following
steps:
■ Select the node you want to configure as a passive node.

■ Click Edit Node.


■ In the Node Type pull-down menu, select Passive.
■ Click OK.

5 Click Next until you arrive at the Implementation page.


6 On the Implementation page, click Finish.
7 Type the following command to initialize the new passive node:

# /opt/pdinstall/prepare_additionalNode.sh addr[,addr,...]

For addr, type the IP address of the public NIC on the new node.

8 (Conditional) If TCP/IP settings on the other nodes have been changed to


improve replication job performance, run the following script on the node
you added:

# /opt/pdconfigure/scripts/support/tcp_tune.sh modify

9 Use the Cluster Manager Java Console to perform a manual failover to the
new node.
Symantec recommends that you test a manual failover to this new node at a
time that is convenient in your schedule. When you perform a manual failover,
your storage pool will be temporarily offline. See the instructions on how to
perform a manual failover in the Veritas Cluster Server (VCS) documentation.
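For example, if the public NIC on the new passive node uses the placeholder address 192.0.2.21, the initialization command in step 7 would be the following:

# /opt/pdinstall/prepare_additionalNode.sh 192.0.2.21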

Activating a new service in the storage pool


Perform this procedure to activate a new service in the storage pool.
To activate a new service in the storage pool
1 Click Settings > Topology.
2 In the left pane, expand the tree view until you see the new service.
3 Select the new service.
4 In the right pane, click one of the following:
■ Activate Content Router
■ Activate Metabase Engine
■ Activate NetBackup Export Engine
If you added a new metabase engine or a new NetBackup export engine, you
are finished. Do not complete the rest of this procedure.

If you added a new content router, proceed to the next step.


5 (Conditional) If you added a new content router, confirm whether to reroute
the storage pool at this time.
Select one of the following options:
■ Yes, reroute now.
This selection starts the rerouting process. Select this choice only after
you activate all the content routers you intend to activate. You want to
reroute only one time, and only after you activate all the new content routers.
■ No, I want to continue making changes.
If you select this choice, make your changes, activate additional content
routers, and proceed to the following section:
See “Rerouting a content router and managing content routers”
on page 288.

Note: When you reroute the storage pool, PureDisk moves data between
content routers. This process requires some free storage space on each of the
content routers. If a content router has no more storage available, your
rerouting might take much longer. Determine whether to run your data
selection removal policies and data removal policies to free some storage
space before you start the rerouting process.

6 (Conditional) Click OK.

Rerouting a content router and managing content routers
Rerouting distributes information to the content routers in a storage pool. When
you add a new content router, for example, you need to reroute all the storage
pool's content routers in order to distribute the data evenly among the content
routers.
The following sections provide information about planning content routers and
rerouting content router data:
■ See “Planning for a new content router” on page 289.
■ See “Permissions for rerouting” on page 290.
■ See “Disaster recovery backups and rerouting” on page 290.
■ See “Data replication policies and rerouting” on page 290.

■ See “Activating and deactivating content routers” on page 290.


■ See “Alleviating content router congestion” on page 291.
■ See “Parallel and serial rerouting examples” on page 292.
■ See “Rerouting the content routers” on page 294.
■ See “Troubleshooting a content router rerouting job” on page 295.

Planning for a new content router


After you add a new content router to a storage pool, you activate the new content
router and reroute the data on your content routers. The rerouting process
redistributes the data evenly across all activated content routers in a storage pool.
During the rerouting process, the content routers still send and receive data. The
send/receive action can be done in parallel mode or in serial mode, as follows:
■ When parallel rerouting is performed, all content routers actively redistribute
data simultaneously. Parallel rerouting is faster than serial rerouting, but
parallel rerouting is not always possible.
■ When serial rerouting is performed, only one content router at a time
redistributes its data.
PureDisk’s ability to perform parallel rerouting depends on the following factors:
■ The current capacity of the content routers in the storage pool.
If your content routers have very little excess capacity, PureDisk performs
serial rerouting. PureDisk employs this method because it requires excess
capacity on each content router when it moves the data between content
routers.
If you receive warning messages from PureDisk about your content router
capacity, you can assume that the routers are near capacity. If your content
routers are near capacity, PureDisk is more likely to perform serial rerouting.
If your content routers have plenty of free capacity, PureDisk is more likely to
perform parallel rerouting. The excess capacity on the content routers allows
PureDisk to move the data between content routers more efficiently.
■ The number of content routers you add to the storage pool.
If possible, always add enough content routers to make parallel rerouting
possible. Generally, if you can double the number of content routers in a storage
pool, you can request parallel rerouting.
For example, assume that you add only one content router to a storage pool
that contains three or more content routers. Each router is near capacity. In
that case, PureDisk reroutes by using the serial method. However, if you add
two or three content routers, you can request parallel rerouting. For more
information about rerouting, see the following:

See “Parallel and serial rerouting examples” on page 292.

Permissions for rerouting


The following permissions are required for a user to be able to reroute the content
routers:
■ Topology management permission.
■ Activate & deactivate permission.
■ Reroute permission.
For information about permissions, see the PureDisk Client Installation Guide.

Disaster recovery backups and rerouting


You cannot run a disaster recovery backup while any content routers are in an
inactive state. Before you add, activate, or deactivate a content router, consider
whether you want to perform a disaster recovery backup first.
Make sure the rerouting process completes before you run a disaster recovery
backup. Disaster recovery backups fail if any content router in the storage pool
is in the inactive state.

Data replication policies and rerouting


Replication jobs and content router rerouting jobs cannot run simultaneously. If
you start a replication job and then start a rerouting job, PureDisk stops the
replication job.

Activating and deactivating content routers


If you accidentally activate a content router, you do not have to reroute first. As
long as the content router is in the Activation requested state, you can deactivate
it without rerouting.
Conversely, if you accidentally deactivate a content router, you can activate it
without rerouting. However, the content router must still be in the Deactivation
requested state.
If the rerouting stopped for any reason, you need to correct the problem that
caused the stoppage and restart the rerouting.
In this case, the content routers that you tried to activate or deactivate have one
of the following states:
■ Activation requested

You have requested activation of this content router, but have not yet started
rerouting.
■ Deactivation requested
You have requested deactivation of this content router, but have not yet started
rerouting.
■ Activation pending
During rerouting, content routers that you have activated change from the
state "Activation requested" to "Activation pending" as soon as the actual
rerouting of data starts.
■ Deactivation pending
During rerouting, content routers that you have deactivated change from the
state "Deactivation requested" to "Deactivation pending" as soon as the actual
rerouting of data starts.
■ Active
This content router is active.
■ Inactive
This content router is inactive.
In this case, you can still make changes, either to activate or to deactivate the
content router, before you start the rerouting process again. However, try to avoid
such situations because they result in unnecessary data movement between
content routers.

Alleviating content router congestion


PureDisk sends event messages when the content routers start to fill up.
The following are the three message levels:
■ First level. The content routers have started to fill.

Message    The Content Router is starting to run low on disk space. Rerouting, data removal and/or garbage collection is advised.

■ Second level. Backups stop when the first content router in a storage pool
reaches this level. This level is the warning threshold.

Message    The Content Router has insufficient disk space to accept new data. No new data will be accepted until more disk space becomes available. Rerouting, data removal and/or garbage collection is needed urgently.

■ Third level. Data from the spool area can continue to fill the content routers
even after the backups stop. The content routers are full.

Message    The Content Router has insufficient disk space to accept new data. No new data will be accepted until more space becomes available. Rerouting, data removal and/or garbage collection is needed urgently. Manual intervention to temporarily free disk space may become necessary.

If your content routers fill up, perform one or more of the following actions:
■ First, run a data removal policy. If you know that you have a lot of unneeded
data on the content router, this process frees up needed space. For information
on data removal policies, see the PureDisk Backup and Restore Guide.
■ Second, add another content router and reroute your data. Because you have
full content routers, this process is very slow. Use the procedures in this
chapter, and perform this action if the data removal policy did not free up
enough space.
■ Third, call Symantec technical support.

Parallel and serial rerouting examples


The following summarizes the situations in which parallel and serial rerouting
can occur:
■ If you have little excess capacity in your content routers, PureDisk performs
serial rerouting, regardless of how much more capacity you add.
■ If you have much excess capacity in your content routers and much new
capacity in the new content routers, Symantec recommends that you select
parallel rerouting.
While it is not possible to guarantee circumstances under which parallel rerouting
always occurs, the following provide example scenarios:
■ See “Example 1 - Serial rerouting scenario” on page 293.
■ See “Example 2 - Parallel rerouting scenario” on page 293.

■ See “Other rerouting examples” on page 293.

Example 1 - Serial rerouting scenario


Assume the following:
■ You have three 4-TB content routers. Their total maximum capacity is 12 TB.
■ Each content router is at 80% capacity. 9.6 TB of data reside on these three
content routers. The routers have 2.4 TB of excess capacity today.
■ You want to add one more 4-TB content router.
In this case, the capacity you want to add (4 TB) is not significantly more than the
excess capacity that exists today (2.4 TB). PureDisk performs serial rerouting.

Example 2 - Parallel rerouting scenario


Assume the following:
■ You have four 500-GB content routers. Their total maximum capacity is 2000
GB.
■ Each content router is at 50% capacity. 1000 GB of data reside on these four
content routers. The routers have 1000 GB of excess capacity today.
■ You want to add four more 500-GB content routers. These routers are to
increase the capacity of the storage pool by 2000 GB.
In this case, the capacity you want to add (2000 GB) is significantly more than the
excess capacity that exists today (1000 GB). Symantec recommends parallel
rerouting.

Other rerouting examples


Table 11-1 shows some additional examples of parallel and serial rerouting
situations.

Table 11-1            Rerouting examples

Current number of content routers    Amount to add    Parallel or serial rerouting?

1                                    1                Parallel
2                                    1 or 2           Parallel
3 at high capacity                   1                Serial
3 at low capacity                    1                Parallel
3                                    2, 3, or 4       Parallel
4                                    1                Serial
4                                    4                Parallel

Rerouting the content routers


Perform this procedure if you added a new content router. This procedure
redistributes the stored data over all the available, active content routers in the
storage pool.
To reroute data
1 Click Settings > Topology.
2 In the left pane, select the storage pool.
3 Expand the storage pool until the PureDisk Web UI displays the content router
you added.
4 Select the content router you added.
5 In the right pane, click Activate Content Router.
6 Specify whether you want to reroute now or later.
For example, respond Yes, reroute now if you have added the last new content
router or the only new content router to this storage pool. Respond No,
reroute later if you added more than one content router, and you need to
activate other new content routers.
7 (Conditional) In the left pane, select the storage pool.
Perform this step if the correct storage pool is not already selected.
8 (Conditional) In the right pane, click Reroute Content Router.
Perform this step if the correct storage pool was not already selected.

9 Wait for the rerouting process to complete.


This action redistributes the data across the content routers in the storage
pool. PureDisk messages indicate whether you can perform parallel rerouting
or serial rerouting. If possible, always choose parallel rerouting.
Rerouting can take some time to complete. For example, it can take several
hours to even several days, depending on the volume of data that needs to
be rerouted. If possible, keep other system activity, such as backup and
removal jobs, low during this activity.
Each content router receives a rerouting job. Look in the job table to make
sure that all rerouting jobs complete successfully. If a job fails, analyze the
problem through the job log, correct the problem if possible, and retry
rerouting as soon as possible. If the second try is unsuccessful, contact your
Symantec technical support representative.
10 (Optional) Enable an escalation action for the rerouting workflow.
Enable an escalation action if you want PureDisk to send you email if the
workflow does not complete in a reasonable amount of time.
For information about escalation actions, see the PureDisk Backup Operator's
Guide.
11 Perform a full disaster recovery backup.
Perform a full disaster recovery backup at this time. Do not perform an
incremental backup.

Troubleshooting a content router rerouting job


The following procedure explains how to troubleshoot a failed rerouting job.
To troubleshoot a content router rerouting job
1 Retrieve the job log for the failed rerouting job.
Perform the following steps:
■ Click Monitor > Jobs.
■ Click the job number of the failed rerouting job.
■ On the job details display, click Job Log.

2 Examine the job log for network errors or other environmental factors.

3 (Conditional) Fix external conditions.


If the job log noted network errors or environmental factors that contributed
to the job's failure, remedy those conditions.
4 Rerun the rerouting job.
Even if there were no external conditions for you to remedy, run the job again.
If repeated efforts to remedy environmental problems and to run the job
successfully have failed, please contact Symantec technical support.

Deactivating a service
The following procedure explains how to deactivate a content router or a
NetBackup export engine in a storage pool. Other services cannot be deactivated.
Before you deactivate a content router, check the capacity of the other content
routers in your storage pool. For information about how to check this capacity,
see the following section:
Preparing to deactivate a content router

Preparing to deactivate a content router


After PureDisk deactivates a content router, it reroutes the storage pool to
distribute the router’s data over the remaining content routers in the storage
pool. Before you deactivate a content router, verify that there is enough storage
space on the remaining content routers to hold all the deactivated router’s data.
Allow for a 10% to 20% margin.
If sufficient space is unavailable, do not deactivate the content router. Without
sufficient space, the deactivation and subsequent rerouting fails. The failures can
leave the entire storage pool in an inoperable state.
If you accidentally deactivate a content router but have not started rerouting yet,
you can activate the content router again.
Use the following procedure to prepare for content router deactivation.

To prepare to deactivate a content router


1 Display the capacity dashboard.
See “Displaying the Capacity dashboard” on page 250.
2 Review the following examples:
See “Example 1 - Content routers that can be rerouted” on page 297.
See “Example 2 - Content routers that cannot be rerouted” on page 297.
3 Examine the dashboard to determine whether remaining content routers
have sufficient excess capacity to ensure that the rerouting can complete.

Example 1 - Content routers that can be rerouted


Assume that you have a storage pool with the following three content routers:
■ Content router 1 has 1 TB capacity and is 60% filled. In other words, 600 GB
are used, and 400 GB are free.
■ Content router 2 has 2 TB capacity and is 60% filled. In other words, 1200 GB
are used, and 800 GB are free.
■ Content router 3 has 1 TB capacity and is 60% filled. In other words, 600 GB
are used, and 400 GB are free.
Option 1. Assume that you want to remove content router 2 by deactivating it and
rerouting the storage pool. To reroute the storage pool, you must reroute the 1200
GB of data on content router 2 to content routers 1 and 3. However, this approach
cannot work because together content routers 1 and 3 have only 800 GB of free
space.
Option 2. Assume that you want to remove content router 3 or content router 1.
For each of these content routers, ample free space exists on the remaining content
routers to hold the 600 GB of data from the source.

Example 2 - Content routers that cannot be rerouted


Assume that you have a storage pool with the following three content routers:
■ Content router 1 has 1 TB capacity and is 70% filled. In other words, 700 GB
are used, and 300 GB are free.
■ Content router 2 has 1 TB capacity and is 70% filled. In other words, 700 GB
are used, and 300 GB are free.
■ Content router 3 has 1 TB capacity and is 60% filled. In other words, 600 GB
are used, and 400 GB are free.

Assume that you want to remove content router 3 by deactivating it and rerouting
the storage pool. You must reroute and redistribute the 600 GB of data on content
router 3 to content router 1 and content router 2. Together, content router 1 and
content router 2 have 600 GB free. The deactivation appears feasible. However,
this plan is not feasible because the rerouting would fill each content router to
100% capacity. The rerouting process requires that the host that receives the data
has a margin of excess capacity.
A content router always has an internal soft limit and an internal hard limit on
capacity. A content router requires a margin of excess capacity to function. Another
reason for maintaining a margin is that the rerouting process is not always even.
Content router 1 might receive 300 GB of data and reach its limit before content
router 2 receives 100 GB of data. The rerouting process would fail even though
content router 2 still has excess capacity.
For more information about soft limits and hard limits, see the PureDisk Backup
Operator's Guide.

Deactivating a content router or NetBackup export engine


The following procedure explains how to deactivate a content router or a
NetBackup export engine.
To deactivate a service
1 Click Settings > Topology.
2 Expand the tree in the left pane until you see the content router or NetBackup
export engine that you want to deactivate.
3 Select the service.

Warning: Ensure that sufficient capacity exists on the remaining content
routers in your storage pool before you deactivate your content router and
try to reroute. Failure to ensure that the rerouting can complete might render
your storage pool inoperable.

Before you proceed to the next step, ensure that you properly prepared to
deactivate the content router.
See “Preparing to deactivate a content router” on page 296.
4 In the right pane, click Deactivate Content Router or Deactivate NetBackup
Export Engine.
For a content router, the status changes to Deactivation requested.

For a NetBackup export engine, the status changes to Inactive. If you
deactivated a NetBackup export engine, proceed to step 8.

5 In the right pane, respond to the question about whether to reroute now or
whether you want to make more changes.
6 Select the storage pool.
7 In the right pane, click Reroute Content Router.
This selection starts the rerouting process, which redistributes data over all
active content routers. The process moves data from the content router in
Deactivation requested status to the content routers that you want to remain
active. Early in the rerouting process, the state of the content router changes
from Deactivation requested to Deactivation pending. At the end of the
rerouting process, PureDisk sets its state to Inactive.
Wait for rerouting to complete successfully before proceeding to the next
step.
For more information about the rerouting process, see the following:
See “Rerouting a content router and managing content routers” on page 288.
8 (Conditional) Take offline the cluster group to which the active service belongs.
Perform this step only if the following are both true:
■ The storage pool is clustered.
■ This service is the only remaining active service on the node.
From the Cluster Manager Java Console, right-click the cluster group, and
select Offline > All Systems.

Managing license keys


After you configure a PureDisk storage pool, you can add or change your PureDisk
license keys. If you configure a central reporting storage pool, you can see the
license keys that are available for all storage pools in your configuration.
For more information about these reports and related central storage pool
management tasks, see the following:
See “Central storage pool authority reports” on page 254.

To add a license key or view license key details


1 Click Settings > Configuration.
2 In the left pane, select License Management.
The right panel displays license key information. Continue with this procedure
if you want to add a license key.
3 In the right pane, click Add Key.
4 In the Key field, type the license key.
5 Click Add.
If you try to add an expired license key, PureDisk generates the following
message:

Key not registered: No such file or directory

Check the expiration date for the key.


To delete a license key
1 Click Settings > Configuration.
2 In the left panel, expand License Management.
3 Select the license key that you want to delete.
4 In the right panel, click Delete Key.
5 To confirm the deletion, click OK on the window that appears.

About central reporting


Users with Central Report rights are allowed to add a storage pool or manage the
list of storage pools. For information about permissions, see the PureDisk Client
Installation Guide.
If you enable central reporting, you can click Settings > Central SPA to retrieve
central reporting information. You can enable central reporting at installation
time, or you can use the following information to implement this feature at a later
date:
■ See “Enabling a storage pool as a central storage pool” on page 301.
■ See “Adding a remote storage pool to a central storage pool” on page 301.
■ See “Disabling central reporting” on page 302.
■ See “Managing storage pools configured in the central storage pool” on page 303.

Enabling a storage pool as a central storage pool


The storage pool configuration wizard lets you designate a storage pool as a central
storage pool. From the central storage pool, you can add other storage pools to
the central storage pool’s reports. These reports contain information about
licensing and capacity.
After installation, you can use the procedure in this section to enable a storage
pool as a central storage pool.
To designate a storage pool as a central storage pool
1 Make sure that the storage pool you want to designate as central is not already
designated as a central storage pool.
Log on to the Web UI, click Settings and visually inspect the Web UI. If Central
SPA appears under Settings, this storage pool is already designated as a
central storage pool. Do not perform the remaining steps of this procedure.
If Central SPA does not appear under Settings, the storage pool is an
independent storage pool or a central storage pool manages it.
If you have a managed storage pool and you want to make it a central storage
pool, you have the following choices:
■ You can disable the managed-to-central reporting relationship that is
currently in effect.
For the current central storage pool and for all of its managed storage
pools, perform the procedure in the following section:
See “Disabling central reporting” on page 302.
■ You can continue with this procedure and make this storage pool a central
storage pool, too. A storage pool can be both a central storage pool and
be a storage pool that is included in another central storage pool’s reports.

2 Log on to the storage pool authority node as root.


3 Type the following command:

# /opt/pdinstall/add_central_reporting.sh

4 Add one or more storage pools to this new central storage pool.
See “Adding a remote storage pool to a central storage pool” on page 301.

Adding a remote storage pool to a central storage pool


The following procedure explains how to add a remote storage pool to a central
storage pool.

To add a storage pool to the central storage pool list


1 Make sure that the storage pool you want to use as a central storage pool is
enabled as a central storage pool.
For information about how to designate a storage pool as a central storage
pool, see the following:
See “Enabling a storage pool as a central storage pool” on page 301.
2 Click Settings > Central SPA.
3 In the left pane, click Storage Pool Management.
4 In the right pane, click Add Remote SPA Entry.
5 Complete the following fields in the right pane:
■ Storage pool name. The name of a storage pool that you want to manage.
The storage pool you name here will be managed by the central storage
pool.
■ Host name (FQDN). The FQDN (recommended) or host name of the storage
pool authority service of the storage pool that you want to manage. This
is the address of the other storage pool authority service.
■ Login. The login name of a valid user of the managed storage pool. The
user must have Central Report rights.
■ Password. The password of that valid user of the managed storage pool.

6 Click Add.
When you add a storage pool, PureDisk queries for a list of other storage
pools that are known (through replication) to that storage pool. PureDisk
adds these linked storage pools to the central storage pool list.

Disabling central reporting


Use the following procedure to disable central reporting.
To disable central reporting
1 Log on to the central storage pool authority node as root.
2 Type the following command:

# /opt/pdinstall/del_central_reporting.sh

Managing storage pools configured in the central storage pool


The following topics contain information about management tasks you might
need to perform when you have a central storage pool:
■ See “Editing or deleting a storage pool in the central storage pool list”
on page 303.
■ See “Starting the PureDisk Web UI for another storage pool in another window”
on page 303.
■ See “Testing connections between storage pools” on page 304.

Editing or deleting a storage pool in the central storage pool list
The following procedures explain how to edit or delete a storage pool.
To edit a storage pool in the central storage pool list
1 Click Settings > Central SPA.
2 In the left pane, select a storage pool.
3 Edit the fields.
4 Click Save.
To delete a storage pool from the central storage pool list
1 Click Settings > Central SPA.
2 In the left pane, select a storage pool.
You cannot delete the central storage pool.
3 In the left pane, click Delete Remote SPA Entry.

Starting the PureDisk Web UI for another storage pool in another window
The following procedure explains how to start the PureDisk Web UI for another
storage pool. The two storage pools must be in a managed-to-central relationship.
To start the PureDisk Web UI for another storage pool in another window
1 Click Settings > Central SPA.
2 In the left pane, select a storage pool.
3 In the right pane, select Manage Storage Pool.
4 Type the login and password to the other storage pool.

Testing connections between storage pools


PureDisk displays a storage pool that is no longer accessible in the middle pane
with a unique status icon. A storage pool might not be accessible because of
connection problems, because the storage pool was removed, and so on. PureDisk
collects this information when you add a storage pool to the list. PureDisk updates
this information each time a license report is generated or when you click the
Test Connection tab.
You can test the connectivity between all the storage pool authorities in a central
storage pool’s list. The test verifies the link to the storage pool. The test does not
verify any login credentials.
To test the connection to a storage pool in the central storage pool list
1 Click Settings > Central SPA.
2 In the left pane, select a storage pool.
3 In the right pane, click Test Connection.
The Web UI displays a status message for the connection to the storage pool.

Rerouting a metabase engine


You can attach many clients to one metabase engine. However, as the metabase
engine becomes overloaded over time, you might need to add an additional
metabase engine to your storage pool. You can examine the capacity of a metabase
engine, add a new metabase engine, and move agents to the new metabase engine.
The process of moving existing agents to a new metabase engine is called metabase
engine rerouting.
Perform the following procedures to reroute a metabase engine:
■ See “(Optional) Gathering metabase engine capacity information” on page 305.
■ See “Preparing clients for rerouting” on page 305.
■ See “Preparing the old metabase engine for rerouting” on page 307.
■ See “Adding the new metabase engine and recording its address” on page 307.
■ See “Rerouting the agents on the metabase engine” on page 308.
■ See “Restarting the agent” on page 309.
■ See “Verifying a metabase engine rerouting” on page 309.
■ See “Troubleshooting” on page 310.

(Optional) Gathering metabase engine capacity information


A metabase engine capacity report indicates whether a metabase engine is filling
up. You need to obtain this information so you know which metabase engines to
reroute. The following procedure explains how to determine metabase engine
capacity.
To gather metabase engine information
1 Log on to the storage pool authority Web UI.
2 Click Settings > Topology.
3 In the left pane, select the storage pool.
4 In the right pane, click Capacity Dashboard.
5 Examine the information in the lower half of the Capacity Dashboard report
under MetaBase Engine Capacity Report.
If Current Usage is nearing 90%, consider whether to add another metabase
engine. At 90% of capacity, the metabase engine shuts down. After you add
an additional metabase engine, you can reroute the metabase engines.
6 Prepare the clients for rerouting.
See “Preparing clients for rerouting” on page 305.

Preparing clients for rerouting


Before you start the rerouting, you need to choose the clients you want to move,
collect client information, and stop the jobs on the clients you selected. The
following procedure explains how to perform these tasks so you can run the
rerouting program that moves the clients from one metabase engine to another.
To gather client and metabase engine information
1 Decide which agents you want to move to a new metabase engine.
2 Click Manage > Agent.
3 For each agent that you want to move to the new metabase engine, perform
the following steps:
■ In the left pane, select an agent.
■ In the right pane, under More Tasks, click Agent Dashboard.

4 For each agent that you want to move, record the agent ID information from
the Agent Dashboard display.
For example, you can record the information below:

The ID field for the first agent _________________________

The ID field for the second agent _________________________

The ID field for the third agent _________________________

The ID field for the fourth agent _________________________

5 Click Monitor > Jobs.


6 Select an agent that you want to move.
7 Examine the jobs for this agent and terminate any that are still running.
During the rerouting process, make sure that no jobs are running on the
agents you want to move, including the following types of job workflows:
■ Files and Folders Backup
■ Full System Backup
■ MS Exchange Backup
■ MS SQL Backup
■ System State and Services Backup
■ UNC Path Backup
■ Data Removal
■ Export To NetBackup
If any jobs are running that use data selections that reside on the agents you
want to move, click Stop job gracefully to end them.
8 Click Manage > Agent.
9 In the left pane, select the agent that you want to move.
10 In the right pane, under More Tasks, click Deactivate Agent.
11 Repeat step 5 through step 10 for each agent that you want to move.
12 Prepare the old metabase engine for rerouting.
See “Preparing the old metabase engine for rerouting” on page 307.

Preparing the old metabase engine for rerouting


The old metabase engine is the metabase engine from which you want to remove
clients. During the rerouting process, no jobs can run on the old metabase engine.
The following procedure explains how to find running jobs and terminate them.
To terminate running jobs on the old metabase engine
1 Click Monitor > Jobs.
2 In the left pane, use the View jobs by pull-down menu to select Topology.
This action lets you select the metabase engine and display its jobs.
3 In the left pane, expand the tree and select the old metabase engine.
4 In the right pane, examine the running jobs and terminate any that are still
running.
During the rerouting process, no jobs can run on the old metabase engine.
These jobs include the following types of workflows:
■ Disaster Recovery Backup
■ Data Mining
■ Server DB Maintenance
■ Replication
■ SPA Replication
If any jobs are running, click Stop job gracefully to end them.
5 Proceed to the following section:
See “Adding the new metabase engine and recording its address” on page 307.

Adding the new metabase engine and recording its address


The following procedure explains how to add a new metabase engine service. You
can add the new service to either an existing node or to a new node.
To add a new metabase engine and record its address
1 Add a new metabase engine to the storage pool.
See “About adding services” on page 280.
2 Click Settings > Topology.
3 Expand the tree in the left pane until it displays all the storage pool services.
4 Select the new metabase engine server agent.

5 In the right pane, note the Agent Address field, and record the FQDN of the
new metabase engine.
Metabase engine node’s identification __________________________
6 Reroute the agents on the metabase engine.
See “Rerouting the agents on the metabase engine” on page 308.

Rerouting the agents on the metabase engine


The following procedure explains how to run the script to move the agents from
the old metabase engine to the new metabase engine.
To reroute the metabase engine
1 Log on to the storage pool authority as root.
2 Type the following command to change to the PureDisk commands directory:

# cd /opt/pdspa/cli

3 Type the following command to start the rerouting:

# /opt/pdag/bin/php MBERerouting.php agent_id new_mbe_id

This command accepts the following arguments:

agent_id       Specify the agent ID of one of the agents you want to move.
               This command accepts one agent_id only. If you have to move more than one agent to a new metabase engine, type a separate MBERerouting.php command for each agent.

new_mbe_id     Specify the node identification for the new metabase engine. This value is the FQDN, host name, or IP address as it appears in the administrative Web UI.
               The rerouting script fails if you specify a host name and the identifier in the Web UI is an IP address (or vice versa).

4 Answer the prompts from the script.


For example, the script prompts you to confirm that you want to continue
with the rerouting process.
5 Observe the completion messages.
Make sure the rerouting completes before you start any new jobs for the
agents you moved.

As part of its work, the script performs the following actions:


■ Activates the agents that now reside on the new metabase engine.
■ Updates the agent configuration files.
■ Copies the data selections from the original metabase engine to the new
metabase engine.

6 Restart the agent


Information about how to restart the agent is available.
See “Restarting the agent” on page 309.
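For example, assuming a hypothetical agent ID of 1234 and a new metabase engine that the administrative Web UI identifies as mbe2.example.com, the step 3 command would be the following:

# /opt/pdag/bin/php MBERerouting.php 1234 mbe2.example.com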

Restarting the agent


Use one of the following procedures to restart the agent service on a client that
you moved to a new metabase engine.
To restart the agent service on a Windows client
1 Log on to the client system as an administrator.
2 Click Start > Settings > Control Panel > Administrative Tools > Services.
3 In the Services window, locate and select the Veritas NetBackup PureDisk
Client Agent service.

4 Click Restart to restart the service.


To restart the agent service on a Linux or UNIX client
1 Log on to the client system as root.
2 Type the following command:

# /etc/init.d/pdagent restart

Verifying a metabase engine rerouting


Complete the following procedure to verify that agents are attached to the new
metabase engine.
To verify rerouting
1 Click Manage > Agent.
2 In the left pane, select an agent.

3 In the right pane, look for Metabase Engine: and make sure that the agent is
attached to the new metabase engine.
4 Test the new configuration by running a manual backup from this agent.

Troubleshooting
You might need to abort a metabase engine rerouting job or a metabase engine
rerouting job might fail. The following procedure returns a storage pool to the
state it was in before you started a metabase engine rerouting job.
To troubleshoot a failed metabase rerouting job
1 Log on to the storage pool authority as root.
2 Type the following command to change to the PureDisk commands directory:

# cd /opt/pdspa/cli

3 Type the following command to return the storage pool to its previous state:

# /opt/pdag/bin/php MBEHeal.php

About clustered storage pool administration


If you installed a clustered storage pool, you can administer the storage pool from
the Veritas cluster manager Java console. For example, you can use this console
to initiate failovers.
Some reservations apply to failovers. Most notably, active PureDisk jobs might
fail during a failover. Symantec suggests that you verify ongoing storage pool
activity in the PureDisk Web UI before you initiate a failover and after an
unexpected failover.
For more information about cluster administration, see the Veritas Cluster Server
User’s Guide.

Changing the PDOS administrator’s password


The following procedure explains how to change the administrator (root) user’s
password on a PureDisk node. This procedure applies for multinode and all-in-one
storage pools.

To change the PDOS administrator’s password


1 Log on to the node as root.
If the storage pool is clustered, log on to the physical computer hardware.
2 Type the following command to reset the password in the operating system:

# passwd

When the command issues prompts, type the old and new passwords.

Changing the PureDisk internal database and the LDAP administrator passwords
Depending on your site practices or your security requirements, you might need to
regenerate the passwords for the PureDisk internal database or for the LDAP
administrator. These passwords are internal. At configuration time, the storage
pool configuration wizard sets them to random, internal values.
The following procedure explains how to change the following passwords:
■ PureDisk database password
■ LDAP administrator password
To change a PureDisk database and authentication password
1 In a browser window, type the following to start the storage pool configuration
wizard:

http://URL/Installer

For URL, type the FQDN of the node that hosts the storage pool authority
service.
2 Click Next until you arrive at the Regenerate Passwords page.
3 Click Regenerate Passwords.
4 Wait for the process to complete.
5 Click Cancel.

Increasing the number of client connections


By default, PureDisk permits 300 client connections to one content router. If the
content router has enough available physical memory, you might be able to
increase the number of client connections. Each content router requires a certain
amount of memory per client, and this calculation is as follows:
(2 x segment_size) + 512 KB
512 KB is the stack size for the client thread.
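For example, assuming a segment size of 128 KB (a placeholder value; use your content router's actual segment size), each client connection requires (2 x 128 KB) + 512 KB = 768 KB of memory. At that rate, the default 300 connections consume roughly 225 MB, and 500 connections would consume roughly 375 MB.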
The following procedure explains how to increase the number of clients.
To increase the number of clients
1 Click Settings > Configuration.
2 In the left pane, expand Configuration File Templates > PureDisk
ContentRouter > Default ValueSet for PureDisk ContentRouter >
ContentRouter > MaxConnections > All OS:number.
3 Select All OS:number.
4 In the right pane, in the Value field, increase the present number to the
number of clients + 5.
Five slots are reserved. The maximum value you can specify is 8192.
5 Click Save.
6 In the left pane, select TaskThreadStackSize.
7 In the right pane, select Add Configuration File Value.
8 On the Properties: Configuration File Value screen, change the Value field
to 256 and click Add.
This value is the stack size for client threads.
9 Restart the content router.
For information about how to restart the content router or other processes
see the following:
■ See “Stopping and starting processes on one PureDisk node (unclustered)”
on page 314.
■ See “Stopping and starting processes on one PureDisk node (clustered)”
on page 317.
■ See “Stopping and starting processes in a multinode PureDisk storage
pool” on page 318.

Adjusting the clock on a PureDisk node


PureDisk requires synchronization of the time setting between the different nodes
in a storage pool. The synchronization relies upon an NTP server, which in normal
operation guarantees that all nodes have the same time setting. However, the time on
a PureDisk node can become incorrect in exceptional cases, such as when the NTP
server fails or the connection between the storage pool and the NTP server fails.
If you notice an incorrect time setting on a PureDisk node, use the following
procedure to adjust the clock in a safe way.
To adjust the clock on a PureDisk node if the time difference is less than one day
1 Stop all PureDisk services on the node.
Typically, this action causes running jobs to fail. For information about how
to stop and start processes, see the following:
■ See “Stopping and starting processes on one PureDisk node (unclustered)”
on page 314.
■ See “Stopping and starting processes on one PureDisk node (clustered)”
on page 317.
■ See “Stopping and starting processes in a multinode PureDisk storage
pool” on page 318.

2 (Conditional) Make sure that the NTP server works properly and can be
reached from the node.
Perform this step if the node you want to fix hosts the storage pool authority
service.
3 Adjust the time on the node.
4 Start the PureDisk processes on the node.
For information about how to stop and start processes, see the following:
■ See “Stopping and starting processes on one PureDisk node (unclustered)”
on page 314.
■ See “Stopping and starting processes on one PureDisk node (clustered)”
on page 317.
■ See “Stopping and starting processes in a multinode PureDisk storage
pool” on page 318.
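As an illustration of step 3 in the preceding procedure only, the following generic Linux commands check and then manually set the clock. The timestamp is a placeholder, and your site might prefer to resynchronize against its NTP server instead:

# date
# date -s "2009-11-05 14:30:00"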
To adjust the clock on a PureDisk node if the time difference is more than one day
◆ Contact Symantec technical support.

Adjusting the Web UI time-out interval


After a period of inactivity, the PureDisk administrator Web UI logs a user out.
By default, PureDisk logs a user out after 30 minutes of inactivity. The following
procedure explains how to set the interval to a different value.
To adjust the time-out interval
1 Log into the storage pool authority node as root.
2 Open file /opt/pdgui/tomcat/webapps/PureDisk/WEB-INF/web.xml.
3 Search for the following string:

<session-timeout>30</session-timeout>

4 Change 30 to a different value.


This value is expressed in minutes.
5 Save file /opt/pdgui/tomcat/webapps/PureDisk/WEB-INF/web.xml.
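If you prefer to make this change non-interactively, and assuming the element still contains the default value of 30, a sed command such as the following performs the same edit. Treat it as an illustrative sketch rather than a supported PureDisk procedure:

# sed -i 's|<session-timeout>30</session-timeout>|<session-timeout>60</session-timeout>|' /opt/pdgui/tomcat/webapps/PureDisk/WEB-INF/web.xml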

Stopping and starting processes on one PureDisk node (unclustered)
You can stop or start individual services or all services.
Use caution when you start or stop individual services. Symantec recommends
that when you stop or start services, specify an action, but do not specify a list of
services. In this case, PureDisk performs the action on all services, and all services
start and stop in the correct order.
Symantec recommends that you start or stop individual services only when the
PureDisk documentation directs you to do so or when a Symantec technical support
representative directs you to do so.

Stopping all services


The following procedure explains how to stop all services.

To stop all services


◆ Type the following command to stop all services:

# /etc/init.d/puredisk stop

For example, assume that you had to abort the installation of a PureDisk
environment. Before trying to install the software again, you need to stop all
the services on the host. You can enter the preceding command to stop all
services correctly before you try to reinstall the environment.

Starting all services


The following procedure explains how to start all services.
To start all services
◆ Type the following command to start all services:

# /etc/init.d/puredisk start

Starting all services without rebooting


If you reboot a PureDisk node, the system restarts all PureDisk services on the
node. The following procedure explains how to stop and restart PureDisk services
manually.
To restart PureDisk processes
◆ Type the following command to restart all PureDisk processes:

# /etc/init.d/puredisk restart

Stopping and starting individual services


The following procedure explains how to stop or start individual services.
To stop or start individual services
◆ Use the following command to stop or start individual services:

/etc/init.d/puredisk action [service] [service] ...

The arguments are as follows:



action   Specifies the operation to be performed on the service. Specify one of the following:

■ start. Starts the specified services.


■ stop. Stops the specified services.
■ restart. Stops and then starts specified services. The system
restarts the services, but it does not enumerate in its messages
which services it restarts.
■ reload. Reloads the configuration files for the specified
services and starts them if they are not already started.
■ status. Displays the status of the specified services.

service   Optional. Use caution when you specify individual service specifications.

If you do not specify any services, the command affects all services. This syntax
is the preferred way to use this command. If you want to specify individual
services, specify them in the order that they appear in the following lists.

If you want to stop more than one service, enter a space character
between each service. Stop them in the following order:

■ pdworkflowd. The PureDisk workflow daemon.


■ pdcr. The PureDisk content router.
■ pdagent. The PureDisk server agent.
■ pdctrl. The PureDisk controller.
■ pdweb. The PureDisk web server.
■ pddb. The PureDisk database service.
■ pdmemcached. The memcache daemon.

If you want to start more than one service, enter a space character
between each service. Start them in the following order:

■ pdmemcached. The memcache daemon.


■ pddb. The PureDisk database service.
■ pdweb. The PureDisk web server.
■ pdctrl. The PureDisk controller.
■ pdagent. The PureDisk server agent.
■ pdcr. The PureDisk content router.
■ pdworkflowd. The PureDisk workflow daemon.

For example:

# /etc/init.d/puredisk start pdcr pdagent
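Similarly, to stop the content router and the server agent, list them in the stop order given above, and use the status action to verify the result:

# /etc/init.d/puredisk stop pdcr pdagent
# /etc/init.d/puredisk status pdcr pdagent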



Stopping and starting processes on one PureDisk node (clustered)

The procedure to stop or start processes on one PureDisk node in a clustered
storage pool is similar to the procedure for a node in an unclustered storage pool.
The difference is that you need to freeze the node while the node’s processes are
stopped. That is, if you stop one or more PureDisk services, the entire resource
group must remain frozen until you start the services again.
For example, assume that you want to stop PureDisk processes on node1 in your
storage pool. The services on node1 appear in the Cluster Manager Java Console
as pd_group1, and you want to restart one or more services on that node.
To stop and start services on example node1, whose services are known to VCS as pd_group1
1 In the Cluster Manager Java Console, right-click pd_group1 and select Freeze
Temporary.
This action prevents pd_group1 from failing over when VCS detects that some
services are down.
2 Use a PureDisk procedure to stop or restart the services on that node.
For more information, see:
See “Stopping and starting processes on one PureDisk node (unclustered)”
on page 314.
3 Visually inspect the Cluster Manager Java Console Web UI and check for fault
conditions.
Depending on how long ago the PureDisk service went down, VCS might have
detected that the service is down. In this case, the resource might appear as
faulted in the PureDisk Web UI. If this is the case, complete the following
steps:
■ Right-click that resource and select Clear Fault - Auto.
■ After you clear the fault, the resource appears as Offline. For the resource to
come back online, VCS must monitor it again. To enable monitoring again,
right-click the resource and select probe - node1.
This assumes that node1’s other services are currently online.

4 Visually inspect the display and make sure that all resources of pd_group1
now appear with a status of Online.
5 Right-click the resource group (pd_group1) and select Unfreeze.
This action unfreezes the node.
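If you prefer the command line to the Cluster Manager Java Console, VCS provides equivalent commands. The following sketch uses the example group pd_group1 and node node1; the resource name is a hypothetical placeholder, and you should verify the exact syntax against your VCS release:

# hagrp -freeze pd_group1
(stop or restart the PureDisk services as described in the procedure)
# hagrp -clear pd_group1 -sys node1
# hares -probe resource_name -sys node1
# hagrp -unfreeze pd_group1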

Stopping and starting processes in a multinode PureDisk storage pool

You can stop and start processes in a multinode PureDisk storage pool. This
procedure is the same for clustered and unclustered storage pools with the
exception of the steps to freeze and unfreeze the cluster.
To stop services in a multinode PureDisk storage pool
1 (Conditional) Freeze the cluster.
Perform this step if the storage pool is clustered.
This action prevents VCS from failing over the node. To freeze the cluster,
use the Cluster Manager Java console, and freeze all the service groups.
2 Stop the services on all the non-storage pool authority nodes first.
The order in which you stop these nodes does not matter, but do not stop the
storage pool authority node at this time.
See “Stopping and starting processes on one PureDisk node (unclustered)”
on page 314.
3 Stop the services on the storage pool authority node.
See “Stopping and starting processes on one PureDisk node (unclustered)”
on page 314.
4 Perform the maintenance operations that you need to perform.
5 Restart services.
See “To start services in a multinode PureDisk storage pool” on page 318.
To start services in a multinode PureDisk storage pool
1 Start the services on the storage pool authority node.
See “Starting all services without rebooting” on page 315.
2 Start the services on all the non-storage pool authority nodes.
See “Starting all services without rebooting” on page 315.
If the storage pool is not clustered, stop here. Do not perform the rest of the
steps in this procedure.
If the storage pool is clustered, complete the rest of the steps in this procedure.
3 Visually inspect the Cluster Manager Java Console Web UI and check for fault
conditions.
Depending on how long ago the PureDisk service went down, VCS might have
detected that the service is down. In this case, the resource might appear as
faulted in the PureDisk Web UI. If this is the case, complete the following
steps:
■ Right-click that resource and select Clear Fault - Auto.
■ After you clear the fault, the resource appears as Offline. For the resource to
come back online, VCS must monitor it again. To enable monitoring again,
right-click the resource and select probe - node.
This assumes that the node’s other services are currently online.

4 Visually inspect the display and make sure that all resources of resource_group
now appear with a status of Online.
5 Right-click a resource_group and select Unfreeze.
This action unfreezes the resource group. Perform this step for all resource
groups.
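As with a single node, you can perform the freeze and unfreeze steps from the command line instead of the Cluster Manager Java Console. The following sketch lists the service groups and then freezes and unfreezes them; the group names are hypothetical, so substitute your own and verify the syntax against your VCS release:

# hagrp -list
# hagrp -freeze pd_group1
# hagrp -freeze pd_group2
(perform the maintenance and restart the services)
# hagrp -unfreeze pd_group1
# hagrp -unfreeze pd_group2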

Restarting the Java run-time environment


If you cannot log onto the PureDisk administrator Web UI, one cause might be
that the Java run-time environment (JRE) has failed. The following procedure
explains how to restart the JRE.
To restart the JRE
1 Log onto the node that hosts the storage pool authority as root.
2 Type the following command:

# /etc/init.d/puredisk status

3 Examine the output.


Example output is as follows:

Checking for LDAP-server running


Checking for VxATd daemon running
Checking for PureDisk Memory Cache Daemon running
Checking for PureDisk Database Server running
Checking for PureDisk WebServer running
Checking for PureDisk Controller running
Checking for PureDisk Controller Monitor running
Checking for PureDisk Server Agent running
Checking for PureDisk ContentRouter running
Checking for PureDisk Workflow Engine running
Checking for PureDisk MetabaseEngine unused
Checking for PureDisk JAVA GUI unused
Checking for CRON daemon running

The preceding output indicates that a metabase engine is also installed on
this node. The status for both the metabase engine and the JAVA GUI is set
to unused, which indicates that the JRE has failed.
4 Type the following command to restart PureDisk:

# /etc/init.d/puredisk start
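After the services start, you can run the status command again to confirm that the JRE recovered. In a healthy configuration, the PureDisk JAVA GUI line, and the metabase engine line if a metabase engine is installed, should no longer report unused:

# /etc/init.d/puredisk status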
Chapter 12
Reconfiguring your
PureDisk environment
This chapter includes the following topics:

■ About the configuration files

■ Examining configuration settings

■ Editing the configuration files with the Web UI

■ Editing the configuration files with a text editor

■ Updating the agent configuration files on a client

About the configuration files


The PureDisk configuration file templates that you see under Settings >
Configuration let you change some of the default values that PureDisk uses. You
can edit and change the configuration fields at any time. However, before you
push your changes to the storage pool, make sure that the system is quiet. Make
sure no backups are running because when PureDisk propagates the changes, it
restarts the agents.
For information about configuration, see the following:
■ See “Examining configuration settings” on page 322.
■ See “Editing the configuration files with the Web UI” on page 322.
■ See “Editing the configuration files with a text editor” on page 326.
■ See “Editing an agent configuration file to improve backup and restore
performance” on page 332.

■ See “Editing an agent configuration file to accommodate large backups”


on page 333.
■ See “Updating the agent configuration files on a client” on page 327.
■ For parameters that you can use to tune PureDisk, see the PureDisk Best
Practices Guide.

Examining configuration settings


The configuration file is divided into several sections and subsections. To see the
file templates presented graphically, use the Web UI and examine the entries
under Settings > Configuration > Configuration File Templates.
The following procedure explains how to examine configuration file sections.
To examine the configuration file sections
1 Click Settings > Configuration.
2 In the left pane, click the plus sign (+) to the left of Configuration file
templates.
3 Expand one of the PureDisk templates.
For example, expand PureDisk Storage Pool Authority > Default ValueSet
for PureDisk Storage Pool Authority > watchdog > interval.
4 Click All OS.
Examine the information in the Value field.

Editing the configuration files with the Web UI


You can edit the configuration values. Typically, you edit the configuration file
values only when directed to do so by a PureDisk procedure.
To edit your configuration
1 Make a copy of the default value set you want to change.
See “Making a copy of a value set” on page 323.
2 Navigate to a configuration value in the copy of the value set.
See “Navigating to a value in the configuration file copy” on page 323.

3 Change a configuration value or delete a configuration value.


See “Changing a configuration file value or deleting a configuration file value”
on page 324.
4 Assign the new template and, optionally, push the configuration file changes.
You can assign the template and wait to push the changes, or you can assign
the template and push the changes in one operation.
See “Assigning the template and, optionally, pushing the configuration file
changes” on page 325.

Making a copy of a value set


Use the following procedure to copy a value set.
To make a copy of a value set
1 Click Settings > Configuration.
2 In the left pane, expand Configuration File Templates.
3 Expand an agent or service.
For example, expand PureDisk Client Agent.
4 Select the default value set.
For example, select Default ValueSet for PureDisk Client Agent.
5 In the right pane, click Copy ValueSet.
You can make only one copy of the default value set.

6 In the left pane, select the copy of the value set.


7 (Optional) In the right pane, in the Value set name (new) field, type a new
name for the copy of the value set.
Alternatively, you can decide to accept the default name.
8 Click Save.
9 Visually inspect the left pane to check that the copy appears.
10 Proceed to the following:
See “Navigating to a value in the configuration file copy” on page 323.

Navigating to a value in the configuration file copy


Use the following procedure to navigate to a configuration file value in the
configuration file copy.

Note: Do not edit a line if its default value contains brace characters, for example,
All OS:{{$agentid}}. These are system variables.

To navigate to the content of a configuration file field


1 (Conditional) Click Settings > Configuration.
Perform this step if the copy of the default value set does not already appear
in the left pane.
2 (Conditional) In the left pane, click the plus sign (+) to the left of Configuration
file templates.
Perform this step if the copy of the default value set does not already appear
in the left pane.
3 (Conditional) Expand the configuration file template so that you can see the
copy of the default value set.
Perform this step if the copy of the default value set does not already appear
in the left pane.
4 Select the copy of the default value set you want to change.
5 Proceed to the following:
See “Changing a configuration file value or deleting a configuration file value”
on page 324.

Changing a configuration file value or deleting a configuration file value

By default, some configuration file values are unspecified. Other configuration
values might have values that you want to change. You can change values, and
you can also delete values.
After you delete a value, you cannot add it back. For example, if you want to delete
PureDisk Client Agent > Copy of Default ValueSet for PureDisk Client Agent >
mail > smtpserver2 > All OS:, you cannot add it back later if your site gets an
additional SMTP server. For this reason, the PureDisk Web UI asks you to confirm
your action when you click Delete Configuration File Value.

To change or delete a configuration file value


1 In the left pane, expand the value set and select a line.
For example, expand PureDisk Client Agent > Copy of Default ValueSet for
PureDisk Client Agent > progress > showicon > All OS:1.
2 In the right pane, edit the Value field or click Delete Configuration File Value.
If you clicked Delete Configuration File Value, the Web UI asks you to confirm
your action. After you delete a value, you cannot add it back.
For example, by default, the showicon value is set to 1, which is enabled. It
shows the icon on the client while PureDisk does backups. To disable the icon,
change the value to 0.
3 Click Save.
4 Proceed to the following:
See “Assigning the template and, optionally, pushing the configuration file
changes” on page 325.

Assigning the template and, optionally, pushing the configuration file changes

After you edit the configuration, you can perform operations to test the new
configuration. After the tests finish, however, you need to push the configuration
file changes you made to the PureDisk system services in order for the changes
to become permanent. If you do not push your configuration file changes, the
values revert to their previous settings the next time the configuration files
are pushed.
The following procedure explains how to propagate configuration file changes to
the storage pool.
To update PureDisk services
1 (Conditional) Click Settings > Configuration.
Perform this step if the copy of the value sets does not already appear in the
left pane.
2 In the left pane, click the plus sign (+) to the left of Configuration File
Templates.
Perform this step if the copy of the value set does not already appear in the
left pane.

3 (Conditional) Expand the tree in the left pane and select the value set copy.
Perform this step if the copy of the value set does not already appear in the
left pane.
For example, select PureDisk Client Agent > Copy of Default ValueSet for
PureDisk Client Agent.
4 In the right pane, click Assign template.
5 Select the entities that you want to use this value set.
6 Click Assign.
7 (Conditional) Click Push Configuration Files.
Perform this step if you want the changes you made to become permanent.
The list of members should include the services you selected in step 5.
PureDisk monitors the configuration files that are pushed to each agent and
checks if the value set has changed. PureDisk performs this check and creates
update jobs only if the value set has changed since the last update job ran for
each agent. For example, if you push a value set to an agent twice without
changing the value set, PureDisk creates only one job.
If you use the Force option, the server-side change checking is ignored and
an update job is always created.
8 (Conditional) Click Push.
Perform this step if you also performed step 7.
Confirm your actions in the dialog boxes that appear.

Editing the configuration files with a text editor


The configuration files are ASCII files. You can edit these files with any text editor.
The files themselves contain descriptions for each field. You can also use the Web
UI to determine sections and field content for each configuration file.

Note: If you edit these files with a text editor, you cannot push them to the storage
pool. Also, any subsequent changes that you make with the Web UI overwrite the
manual changes you made with a text editor. Symantec recommends that you use
this method only if instructed to do so by a Symantec technical support
representative.

Table 12-1 shows the locations of these ASCII files.



Table 12-1 Configuration file locations

Configuration file Location

Storage pool authority /etc/puredisk/spa.cfg

Content router /etc/puredisk/contentrouter.cfg

Metabase server /etc/puredisk/pdmbs.cfg

Metabase engine /etc/puredisk/pdmbe.cfg

Controller /etc/puredisk/pdctrl.cfg

PureDisk node agent /etc/puredisk/agent.cfg
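For example, to open the content router configuration file in a text editor on the node that hosts the content router, you might type the following:

# vi /etc/puredisk/contentrouter.cfg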

Updating the agent configuration files on a client


Regardless of how you installed and registered an agent the first time, you can
change the agent configuration on a client, as follows:
To retrieve a new configuration file, see the following procedure:
See “To retrieve a new configuration file for a client” on page 328.
To change the data lock password, see the following procedure:
See “To reset the data lock password on a client” on page 329.
For information about the parameters and arguments that the pdregister
command uses, see the Windows and UNIX installation chapters of the PureDisk
Client Installation Guide.
The procedure To retrieve a new configuration file for a client explains how to
update an agent configuration file in the following situations:
■ Example 1.
A client system crashes or has been reimaged. In this case, you can reregister
the client on the storage pool authority. The original configuration file is
completely lost.
Assume the following series of events:
■ A user installs the PureDisk agent software on a client.
■ The client system crashes.
■ The user reinstalls the operating system software on the client.
■ The user can run the pdregister command as shown in this procedure to
restore the configuration file for this particular client.

■ Example 2.
You accidentally make erroneous edits to a configuration file.
You might change configuration file parameters and later want to revert to
the original configuration file. If the old configuration file exists and has a
valid agent ID, you can obtain a new copy from the storage pool authority.
■ Example 3 (Linux only).
If you have SPAR enabled and you need to retrieve a new configuration file.
SPAR enables you to replicate storage pool information from a remote storage
pool to a main storage pool. The remote storage pool acts as a client to the
main storage pool.
For more information about how to use SPAR, see the PureDisk Administrator’s
Guide.
In the following procedure, the format for the pdregister command is shown
generically for MS Windows or UNIX clients. The .exe suffix applies only to
Windows clients.
To retrieve a new configuration file for a client
1 Invoke the PureDisk Web UI and make sure that the client appears in the list
of clients in the left pane when you select Manage > Agent.
Do not perform this procedure if the client is not registered on the storage
pool currently.
2 Log on to the client system as root (Linux or UNIX platforms) or as the
administrator (Windows platforms).
3 Change to the directory into which you installed the agent software.
On Linux and UNIX platforms, change to install_path/pdag/bin. The default
is /opt/pdag/bin.
On Windows platforms, change to the directory into which you installed the
agent. By default, this directory is C:\Program Files\Symantec\NetBackup
PureDisk Agent\bin.

4 Run the pdregister command.


To retrieve a new agent configuration file from the storage pool authority,
type the command with the following parameters:

pdregister[.exe] --action=configfile --type=Agent --login=login --passwd=pwd \
--url=https://yourspa/spa/ws --todisk --agentid=id [--productname=SPARR]

This set of parameters assumes that the original configuration file is


completely lost. To ensure that PureDisk recognizes this client, you need to
specify the agent ID.

(Linux only) Specify --productname=SPARR if SPAR is enabled, and this host


is a client to a main storage pool. Make sure that the logon and password
belong to a user that has Agent Management permissions.
To retrieve a new agent configuration file from the storage pool authority
and overwrite an existing configuration file, type the command with the
following parameters:

pdregister[.exe] --action=configfile --type=Agent --url=https://yourspa/spa/ws \
--todisk [--productname=SPARR]

This set of parameters assumes that the original configuration file still resides
on the client. Your intent is to restore it to the form it has on the storage pool
authority. You do not need to specify the agent ID or the logon credentials.
More information about the parameters and arguments that pdregister
accepts is available.
See the PureDisk Client Installation Guide
5 (Conditional) Activate the agent.
Perform this step if the agent is not activated.
More information about how to activate the agent is available.
See the PureDisk Client Installation Guide
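As an illustration of step 4, a fully expanded command on a Linux client might look like the following. The logon name, password, storage pool authority FQDN, and agent ID are hypothetical placeholders only:

# ./pdregister --action=configfile --type=Agent --login=admin --passwd=secret \
--url=https://spa.acme.com/spa/ws --todisk --agentid=42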
To reset the data lock password on a client
1 Invoke the PureDisk Web UI and make sure that the client appears in the list
of clients when you click Manage > Agent.
Do not perform this procedure if the client is not registered on the storage
pool currently.
2 Log on to the client system as root for UNIX clients and as admin for MS
Windows clients.

3 Change to the directory into which you installed the agent software.
On Linux and UNIX platforms, change to install_path/pdag/bin. The default
is /opt/pdag/bin.
On Windows platforms, change to the directory into which you installed the
agent. By default, this directory is C:\Program Files\Symantec\NetBackup
PureDisk Agent\bin.

4 Type the following command to reset the data lock password:

pdregister[.exe] --action=replacedatalockpassword --type=Agent --login=login \
--passwd=pwd --url=https://yourspa/spa/ws --olddatalockpasswd="old_pwd" \
--datalockpasswd="pwd"

Make sure that the logon and password belong to a user that has Agent
Management permissions.
Chapter 13
Tuning and optimization
This chapter includes the following topics:

■ Tuning backup and restore performance

■ Tuning replication performance

Tuning backup and restore performance


You can improve PureDisk backup and restore performance. In some cases, you
can use multistreamed backups, multistreamed restores, and segmentation options
for backup jobs. PureDisk lets you optimize the communication between the client
and the content router, and this optimization can increase backup and restore
performance.
The following information explains how to implement these optimizations:
■ See “Editing an agent configuration file to improve backup and restore
performance” on page 332.
■ See “Editing an agent configuration file to accommodate large backups”
on page 333.
■ See “Multistreamed (parallel) backups” on page 334.
■ See “Multistreamed (parallel) restores” on page 335.
■ See “Segmentation options for backup jobs” on page 336.
Any changes that you make to these configuration files are considered an
advanced operation, and you need to measure your results after each change.
These changes require testing and experimentation. When you measure
performance, measure the time for the putfiles job step, not total job time.
If your backups and restore jobs seem slow, try increasing the number of streams
and checking the resulting performance. In some cases, performance can be better
if you use fewer streams.

Editing an agent configuration file to improve backup and restore performance

The default agent settings in the configuration files do not enable multistreaming
of backups or restores. By default, there is no segmentation threshold value and
no list of segmentation file types.
You can change the default configuration file settings. Symantec recommends
that you set segmentation parameters in the Web UI and only vary the agent
configuration parameters for special client situations. For example, you might
have some clients on slow network lines. To improve performance, you can use
multistreaming or the segmentation options.
For information about changing configuration files, see the following:
See “About the configuration files” on page 321.
To tune backup performance
◆ To tune backup performance, you can edit the following settings under
Settings > Configuration > Configuration File Templates > PureDisk Client
Agent > Default ValueSet for PureDisk Client Agent > backup:
■ DontSegmentTypes.
The default value is an empty list, All OS:.
■ DontSegmentThreshold.
The default value is All OS:0.
■ maxstreams.
The default value is All OS:1.
■ MaxRetryCount
Specifies the maximum number of times that the backup job can attempt
to send each file in the event of network errors. Specify any positive
integer. The default is 5. The backup job stops and issues an error the first
time it encounters a file that cannot be transmitted and the retry count
has been exceeded.
For information about the segmentation parameters, see the following:
■ The PureDisk Backup Operator’s Guide.
■ See “Segmentation options for backup jobs” on page 336.

To tune restore performance


◆ To tune restore performance, you can edit the following settings under
Settings > Configuration > Configuration File Templates > PureDisk Client
Agent > Default ValueSet for PureDisk Client Agent > restore:

■ maxstreams.
The default value is All OS:1.
■ MaxSegmentPrefetchSize. Specifies the number of bytes to prefetch
during a restore. The default is 16MiB. Valid units are B, KiB (1024), MiB,
GiB, KB (1000), MB, and GB.
When set to zero (0), PureDisk disables prefetching. During restore,
PureDisk fetches only one data segment at a time. This behavior is identical
to the default restore behavior before PureDisk 6.6.
If you restore to a client that is very low on memory, and you want to
ensure that memory use is low during the restore job, you can set this
value to zero (0) or to a low value.
■ SegmentChunkSize. Specifies the number of bytes of data to transfer
over the network from the server to the client at one time. The default is
32KiB. Valid units are B, KiB (1024), MiB, KB (1000), and MB.
The range for this setting is from 1KiB through 16 MiB.
This setting has no effect if MaxSegmentPrefetchSize is set to zero (0).

Editing an agent configuration file to accommodate large backups


During a backup, the agent gathers a list of the files to be backed up. On Linux
and UNIX clients, the agent writes these lists to the following directories:
■ /opt/pdag/var

■ /opt/pdag/tmp

During very large backups, these lists can grow beyond the space that you allocated
to the / partition, which is typically kept relatively small. If you expect this space
problem might happen on a client, use the following procedure to modify the
agent configuration file on that system.

Note: Repeat this procedure each time you update the agent configuration files
through the Web UI. The repetition is necessary because updates through the
Web UI overwrite all agent configuration files.

To modify the agent configuration to accommodate large backups


1 Type the following command to stop the agent service:

# /opt/pdag/bin/pdagent --stop

2 Open the agent configuration file in a text editor.


For example:

# vi /etc/puredisk/agent.cfg

3 In the [paths] section, type new paths for the following parameters:

var
temp

The new paths must be full paths, not relative paths. They must refer to a
partition large enough for the backups.
4 Save and close the file.
5 Type the following command to start the agent service:

# /opt/pdag/bin/pdagent
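As an illustration of step 3 in the preceding procedure, the edited [paths] section might resemble the following. The directory names are hypothetical, and the exact key syntax can differ between releases, so follow the comments in agent.cfg itself:

[paths]
var = /space/pdag/var
temp = /space/pdag/tmp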

Multistreamed (parallel) backups


A backup job can have multiple threads. When the job runs, PureDisk might not
always use the maximum number of backup streams that you specify. When a job
starts, PureDisk determines the total amount of data to upload. Based on the
length of the data entries list and the number of streams requested, PureDisk
determines and initializes the optimal number of threads.
After PureDisk validates the input data list, it uses the following rules to properly
distribute data entries across the requested number of data streams:
■ If the number of data entries is less than the number of requested streams, a
single stream backup is used.
■ If the number of data entries is less than 4096 per stream, they are balanced
across the streams.
■ If the number of data entries is more than 4096 per stream, each stream is
given 4096 data entries. As each stream finishes, it is given the next 4096 data
entries until no entries remain.
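For example, under these rules a backup of 10,000 data entries with 4 requested streams uses all 4 streams with roughly 2,500 entries each, because 2,500 is less than 4096 per stream. A backup of 20,000 entries with 4 requested streams starts each stream with 4096 entries and hands out the remaining entries in blocks of 4096 as each stream finishes. The entry counts here are chosen only to illustrate the arithmetic.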
For multistreamed backup jobs, a distinction is made between transient and fatal
error conditions. An example of a transient error condition is a temporary loss
of connection with remote storage. Examples of fatal error conditions are an


empty routing table, an invalid data selection ID, or an out-of-memory error.
If you use multiple backup streams, the behavior of the Sorting option is slightly
changed. This occurs, for example, if you set Number of backup streams in the
Parameters tab of a backup policy to 2 or more.
If you enable both sorting and multistreaming, the PureDisk agent first sorts the
complete list of files by their size. It then distributes small chunks of the sorted
list over the available streams. For this reason, and because PureDisk backs up
multiple files at one time, the order in which it backs up the files cannot match
exactly the order of the files in the sorted list.
For large backups, the trend of the backup order is still as expected (smallest files
first), but for small backups of only a few files, there may be some anomalies.
If you run too many parallel backup streams, you can overload the client system.
If you overload the client, one of the following occurs:
■ The backups might fail with a status of SUCCESS_WITH_ERRORS. In this case,
not all files are transferred, and there is no obvious error in the log file.
■ PureDisk might display the following message:

Insufficient system resources exist to complete the requested service

You might need to experiment with more than one approach to performance
tuning, or you might need to use different combinations of streams and segment
size values. The exact values depend on the specific client and its hardware
configuration.
Symantec recommends that you start with a small number of streams and that
you increase max_streams only if the backup performance is unacceptable. At
some point, if you increase max_streams, performance does not improve. A
max_streams value that is too large provides no benefit, can overload the client
system, and can cause backups to fail.

Multistreamed (parallel) restores


A restore job can have multiple threads. When the job runs, PureDisk might not
always use the maximum number of restore streams that you specify. When a job
starts, PureDisk determines the total amount of data to restore. Based on the
length of the data entries list and the number of streams requested, PureDisk
determines and initializes the optimal number of threads.
PureDisk validates the data list. After the validation, if it determines that the
number of data entries is fewer than three per stream, it uses a single stream
restore.

Segmentation options for backup jobs


The following sections describe the segmentation options you can use to reduce
the number of segments that are used to transmit backup files to a PureDisk
content router:
■ See “Segmentation threshold value” on page 336.
■ See “File-type segmentation” on page 336.
By editing these segmentation values, you might be able to improve performance.
A difference in backup performance is more noticeable if there is a high latency
on the connection between the agent and the content router(s).

Note: If these two options are not set identically on all agents, the effectiveness
of the PureDisk deduplication option (PDDO) can be reduced. If you enable PDDO, the
MATCH_PDRO parameter is enabled by default. When enabled, the MATCH_PDRO
parameter specifies that PureDisk calculates the segment size based on the file
size, which is the same method by which PureDisk calculates the segment size
for a typical backup.

Segmentation threshold value


A segmentation threshold value can be set in the Web UI or in the agent
configuration file.
When a PureDisk agent transfers a file to a content router, it checks whether the
file is smaller than the segmentation threshold value. If the file is smaller, the file
is transmitted in only one segment. If the file is larger than the maximum segment
size, it is transmitted in multiple segments. However, each segment is the
maximum segment size, except that the last might be smaller.
The segmentation threshold value cannot be set to a value greater than the
maximum segment size, or smaller than twice the default segment size.

File-type segmentation
A comma-separated list of file types (identified by file name suffixes) can be set
in the UI or in the agent configuration file.
As a PureDisk agent transfers a file to a content router, it checks whether the
file’s suffix is contained in this list. If the suffix is in the list, the file is transmitted
in only one segment. If the file is larger than the maximum segment size, it is still
transmitted in multiple segments. However, each segment is the maximum
segment size, except possibly the last segment.

Unexpected results
Multiple backup streams or restore streams can have the following unexpected
results:
■ Multistreaming can increase the load on content routers.
■ With some fast network or hardware configurations, multistreaming can slow
the performance of other backup or restore jobs.
For example, if the PureDisk metabase is on the same node as a content router,
the metabase import step can take longer. The content router might not have
completed its processing, because the spool is filled faster than the content
router empties it. This delay of the metabase import can lead to a longer total
backup time, but during that time, the client and the wire are idle.
■ Multistreaming can stop other agents from connecting to a content router
because each job stream needs one of the limited connections (limited to 300).
■ Client backup and restore performance might be affected if all parallel streams
do central-processor-bound fingerprint calculations.
■ If you run too many parallel backup streams, you can overload the client
system. When this occurs, the system generates the following error: Unable
to resolve host name. PureDisk generates this error because each backup
stream has its own content router context. Because of the unique contexts,
each stream makes DNS address lookups for the content router address(es).
If the content router does not get a response from DNS lookups in one minute,
it reports this error.
If backups have many streams, DNS lookups can fail when the central processor
usage is high and some streams are not rescheduled fast enough.
■ If you add multistreaming after the CPU/wire is full, you see no performance
increase.

Tuning replication performance


PureDisk lets you tune replication performance. All tuning is accomplished through
changing configuration parameters on the source storage pool. You can set
PureDisk parameters to tune the following aspects of replication:
■ Parallelism. To increase parallelism in PDDO replication, set the maxstreams
parameter to a larger value. The higher the value of the maxstreams parameter,
the more parallelism is obtained. The valid range is from 1 to 10 streams.
■ Bandwidth. If you need to limit the bandwidth used between storage pools
during replication, you can set the bandwidthlimit parameter. PureDisk uses
only the amount of bandwidth specified in this parameter. Every stream uses
the specified bandwidth value. For example, if you use 4 streams and a
bandwidth of 200 KB, PureDisk uses 800 KB of bandwidth between storage
pools during replication.
■ Network errors. If you want to minimize the effect of network errors on
replication, you can use the following parameters: maxretrycount, sleeptime,
and maxsleeptime.
For example, the replication process might not be able to complete because of
network errors. In this case, you can specify how many times the source storage
pool retries and how long it waits before it tries to send the data again to the
destination storage pool. Depending on network conditions, you might specify
a wait of a few seconds or a few minutes.
To tune replication performance
1 Log into the Web UI of the source storage pool.
2 Click Settings > Configuration > Configuration File Templates > PureDisk
Server Agent > Default ValueSet for PureDisk Server Agent > replication.
3 Change the values for one or more of the following configuration parameters
under replication:
■ bandwidthlimit. Specifies the bandwidth limit, between the storage pools,
for the replication activity, in KB/sec. If you set this parameter to zero (0),
you specify unlimited bandwidth. The default is 0 (disabled).
■ maxretrycount. Specifies the maximum number of times that the PureDisk
replication workflow can attempt to send data. Specify any positive integer.
The default is 10.
■ maxsleeptime. Specifies the maximum amount of time, in seconds, that
the replication workflow is permitted to sleep between retries. Specify
any positive integer. The default is 60.
■ maxstreams. Specifies the maximum number of PDDO replication streams
per replication job. The maximum value is 10; if you set this parameter
to a value greater than 10, PureDisk uses 10 streams per replication job.
Specify any positive integer. The default is 4.
■ sleeptime. Specifies the number of seconds that the replication workflow
can sleep between 2 retries. Specify any positive integer. The default is
10.

The delayeddosize and maxsocachesize parameters also affect replication, but


Symantec recommends that you do not reset these parameters without the
assistance of Symantec Technical Support staff.
For information about how to edit the configuration file parameters, see the
following:

See “About the configuration files” on page 321.


Appendix A
Installing the clustering
software
This appendix includes the following topics:

■ About the Veritas Cluster Server (VCS) software installation

■ (Conditional) Examining the NICs for the private heartbeats

■ Synchronizing passwords

■ Installing the Veritas Cluster Server (VCS) software

■ Configuring VCS

■ (Conditional) Using YaST to create the storage partitions

About the Veritas Cluster Server (VCS) software installation

The information in this topic describes how to install VCS manually during a
clustered storage pool disaster recovery. Do not use the information in this topic
for PureDisk 6.6 initial installations or for upgrades. The information in the
PureDisk Storage Pool Installation Guide describes how to install and upgrade
PureDisk storage pools, and it presents the PureDisk 6.6 wizard-based automated
installation method.
The following figure, Figure A-1, shows an example cluster.
You can refer to this figure for terminology and general information when you
install the VCS software on the failed nodes. The figure shows the NIC information
for each node, explains each node's role in the storage pool, shows the public
network, and shows the heartbeat networks.

Figure A-1 Example three-node cluster

Networks shown: first public network, first heartbeat network, second heartbeat network

node 1 - all-in-one
NIC 1:
Host IP: 100.100.100.100
Host FQDN: node1.acme.com
Virtual IP: 100.100.100.101
Virtual FQDN: node1v.acme.com
NIC 2:
Attached to private network. No IPs.
NIC 3:
Attached to private network. No IPs.

node 2 - metabase engine + content router
NIC 1:
Host IP: 100.100.100.110
Host FQDN: node2.acme.com
Virtual IP: 100.100.100.111
Virtual FQDN: node2v.acme.com
NIC 2:
Attached to private network. No IPs.
NIC 3:
Attached to private network. No IPs.

node3 - Spare
NIC 1:
Host IP: 100.100.100.120
Host FQDN: node3.acme.com
Virtual IP:
Virtual FQDN:
NIC 2:
Attached to private network. No IPs.
NIC 3:
Attached to private network. No IPs.

SAN

The following topics describe how to install the VCS software on a failed cluster node:
■ See “(Conditional) Examining the NICs for the private heartbeats” on page 343.

■ See “Synchronizing passwords” on page 347.


■ See “Installing the Veritas Cluster Server (VCS) software” on page 351.
■ See “Configuring VCS” on page 360.
■ See “(Conditional) Using YaST to create the storage partitions” on page 365.

(Conditional) Examining the NICs for the private heartbeats

Perform these procedures in the following situations:
■ If you do not have this storage pool's cluster planning spreadsheet.
■ If you installed new NICs in the failed nodes as part of your disaster recovery.
PureDisk requires that two NICs in each node be configured without addressing.
These NICs are for the cluster’s private heartbeat. The following procedures
explain how to examine the NICs in each node and remove the addressing, if
needed, from two of the NICs.

Examining the NICs in this node for addressing


This procedure explains how to examine each NIC for existing addresses. This
procedure also explains how to remove the addresses from the two NICs needed
for the private heartbeat.
For each node that you want to include in a cluster, PureDisk requires that two
NICs be configured without any addressing. These two NICs comprise the private
network that VCS uses to monitor the cluster’s heartbeat.
Repeat the following procedure for each node.
To examine the NICs
1 Log into one of the nodes.
2 Type the ip a(8) command to retrieve the NIC information for this node.
The output of the ip a(8) command includes each NIC’s MAC address in the
link/ether field.

3 Examine the ip a(8) command output.


The following two examples show a correctly configured node and an
incorrectly configured node.
Example 1 - correctly configured NICs:

# ip a
1: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 100
link/ether 00:11:43:e4:0b:2a brd ff:ff:ff:ff:ff:ff
inet 10.80.92.102/21 brd 10.80.95.255 scope global eth0 # << note the IP addr
inet6 fec0::80:211:43ff:fee4:b2a/64 scope site dynamic
valid_lft 2591999sec preferred_lft 604799sec
inet6 fe80::211:43ff:fee4:b2a/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:11:43:e4:0b:2b brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop qlen 1000
link/ether 00:04:23:b0:4f:86 brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop qlen 1000
link/ether 00:04:23:b0:4f:87 brd ff:ff:ff:ff:ff:ff
6: sit0: <NOARP> mtu 1480 qdisc noop
link/sit 0.0.0.0 brd 0.0.0.0

Example 2 - incorrectly configured NICs:

# ip a
1: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 100
link/ether 00:11:43:e4:0b:2a brd ff:ff:ff:ff:ff:ff
inet 10.81.92.102/21 brd 10.80.95.255 scope global eth0 # << note the IP addr
inet6 fec0::80:211:43ff:fee4:b2a/64 scope site dynamic
valid_lft 2591999sec preferred_lft 604799sec
inet6 fe80::211:43ff:fee4:b2a/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 100
link/ether 00:11:43:e4:0b:2a brd ff:ff:ff:ff:ff:ff


inet 10.82.92.102/21 brd 10.80.95.255 scope global eth1 # << note the IP addr
inet6 fec0::80:211:43ff:fee4:b2a/64 scope site dynamic
valid_lft 2591999sec preferred_lft 604799sec
inet6 fe80::211:43ff:fee4:b2a/64 scope link
valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 100
link/ether 00:11:43:e4:0b:2a brd ff:ff:ff:ff:ff:ff
inet 10.83.92.102/21 brd 10.80.95.255 scope global eth2 # << note the IP addr
inet6 fec0::80:211:43ff:fee4:b2a/64 scope site dynamic
valid_lft 2591999sec preferred_lft 604799sec
inet6 fe80::211:43ff:fee4:b2a/64 scope link
valid_lft forever preferred_lft forever
5: eth3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:11:43:e4:0b:2b brd ff:ff:ff:ff:ff:ff
6: sit0: <NOARP> mtu 1480 qdisc noop
link/sit 0.0.0.0 brd 0.0.0.0

In example 2, you can use eth0 for the node’s public NIC. You need to remove
the addressing from eth1 and eth2 so you can use these NICs for the private
heartbeat.
4 Record the NIC information and the MAC address information from the ip
a(8) command output.

The YaST interface identifies each NIC by its MAC address. You need to know
the MAC addresses of the two NICs that you want to use for the private
heartbeat.
Make sure you gathered this information and recorded it on the planning
spreadsheet, PureDisk_ClusterPlanning.xls.
5 Proceed with the installation as follows, depending on your situation:
■ If only one NIC on this node is configured with an IP address, and the two
other NICs on this node are not configured with addressing, the addressing
on this node is correct. Go back to step 1 of this procedure and repeat the
procedure for the next node.
■ If you installed new NICs in the failed nodes as part of your disaster
recovery, proceed to the following:
See “(Conditional) Removing addressing from the private heartbeat NICs”
on page 346.
■ If the NICs for each node's private heartbeat are configured without
addressing and you do not have to edit the /etc/hosts file to configure
communication, proceed to the following:
See “Synchronizing passwords” on page 347.

(Conditional) Removing addressing from the private heartbeat NICs


Perform the following procedure if you installed new NICs in the failed nodes as
part of your disaster recovery. This procedure removes addressing from one or
two NICs in this node.
To remove addressing
1 Launch YaST and display the Network Card Configuration Overview screen.
Perform the following steps to launch the YaST interface:
■ At a command prompt, type the following command to launch the SUSE
Linux YaST configuration tool.

# yast

■ In the YaST Control Center main screen, select Network Devices >
Network Card.
■ On the Network Setup Method Screen, select Traditional Method with
ifup and select Next.

2 On the Network Card Configuration Overview screen, highlight one of the


NICs that you want to use for the heartbeat and select Edit.
In the bottom pane, the Device Name field identifies each NIC by its MAC
address. Select one of the secondary NICs in this node. Do not select the
primary NIC upon which you have configured the host address (the
administrative address) as part of the PDOS installation procedure.
3 On the Network Address Setup screen, select None Address Setup and select
Next.
4 Repeat the following steps to configure None Address Setup on the second
private NIC:

■ Step 2
■ Step 3

5 Select Finish.
6 Select Quit.
7 Proceed to one of the following, depending on your situation:
■ If you need to log into another node and configure the private heartbeat
NICs in that node without any addressing, proceed to the following:
See “(Conditional) Examining the NICs for the private heartbeats” on page 343.
■ If the NICs for each node's private heartbeat are configured without
addressing and you do not have to edit the /etc/hosts file to configure
communication, proceed to the following:
See “Synchronizing passwords” on page 347.

Synchronizing passwords
Perform the following procedures to synchronize and distribute the passwords
on all the nodes in your cluster.

Generating the authentication key on each node


The following procedure explains how to generate an authentication key from
the node that you want to use as the storage pool authority node.
To generate an authentication key
1 Log into the node you want to use as the storage pool authority node.
2 Type the following ssh-keygen(1) command and respond to its prompts:

# ssh-keygen -t rsa

This command creates an authentication key. It issues several prompts, as


follows:
■ Press Enter at the following prompt to save the file in the default location:

Generating public/private rsa key pair.


Enter file in which to save the key (/root/.ssh/id_rsa):

■ Press Enter to create an empty pass phrase at the following prompt:

Enter passphrase (empty for no passphrase):

■ Press Enter again to confirm the empty pass phrase at the following
prompt:

Enter same passphrase again:

■ Read the command summary. For example:

Your identification has been saved in /root/.ssh/id_rsa.


Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
85:66:10:cc:c8:6a:b3:c0:6a:69:85:32:5c:ed:fb:64 root@sr2e

Make sure to issue this command on each node in the cluster.


3 Log into another node.
4 Repeat the following steps until you have issued the ssh-keygen(1) command
on each node in the cluster:
■ Step 2
■ Step 3

Note: Make sure to issue the ssh-keygen(1) command on each node.

5 Proceed to the following:


See “Collect the SSH public keys” on page 348.

Collect the SSH public keys


The following procedure configures the trust that the VCS installer needs before
it can push the software to all the nodes in the cluster.

To collect the secure keys


1 Log into the node that you want to use for the storage pool authority node.
2 Issue the following ssh(1) command from this node to generate a public key
and write it to a file:

ssh FQDN_of_this_node cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

For example:

node1:# ssh node1.acme.com cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

This command initiates a dialog. Depending on your environment, this


command can issue different questions. Respond with y or yes to the prompts,
and enter the root password for the node in response to the password prompts.
For example, respond with y or yes if the commands detect preexisting key
files and ask whether to overwrite them.
3 Issue the following ssh(1) command from this node to another node:

ssh FQDN_of_another_node cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

4 Repeat the following step for each node in the cluster:


■ Step 3
Your goal is to issue the ssh(1) command from this node to all the other nodes.
For example, when finished, the following commands might appear on your
screen for a three-node cluster:

node1:# ssh node1.acme.com cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys


node1:# ssh node2.acme.com cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
node1:# ssh node3.acme.com cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

5 Proceed to the following:


See “Distributing the key file” on page 349.

Distributing the key file


The following procedure explains how to distribute the key. This procedure ensures
that you can upgrade the cluster’s software from whatever node happens to be
the storage pool authority node in the future.

To distribute the key to the other nodes


1 Use the scp(1) command to distribute the authorized_keys file to all the
other nodes.
For example:

scp ~/.ssh/authorized_keys root@node:~/.ssh/authorized_keys

For node, specify the host FQDN of one of the other nodes.
2 Repeat the following step and issue the scp(1) command to copy the key file
to each of the other nodes:
■ Step 1

3 Proceed to one of the following:


■ If you want to verify the SSH access, proceed to the following:
See “(Optional) Verifying the SSH access” on page 350.
■ If you want to start installing the VCS software, proceed to the following:
See “Installing the Veritas Cluster Server (VCS) software” on page 351.

(Optional) Verifying the SSH access


The following procedure explains how to verify that SSH can access the local node
in addition to other nodes without encountering password prompts.
To verify SSH access
1 Log into any one of the nodes.
2 Type the following command to test connectivity:

ssh host_FQDN_of_this_node w

For example, you can issue the following command from the storage pool
authority node in the cluster:

# ssh node1.acme.com w
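
To check every node from a single session, you can also use a loop like the following sketch (the node names are hypothetical). The -o BatchMode=yes option makes ssh fail instead of prompting, so any node that still requires a password is reported immediately:

for node in node1.acme.com node2.acme.com node3.acme.com; do
    ssh -o BatchMode=yes $node w || echo "password still required for $node"
done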

3 (Conditional) Respond yes to the following prompt if it appears:

Are you sure you want to continue connecting (yes, no) ? yes

4 Analyze the output from this command as follows:


■ In a correctly configured cluster, the output from this command includes
information about users, the system load, and so on.

■ In an incorrectly configured cluster, the output from this command is a prompt for a password.

5 Type additional ssh(1) commands from this node to each of the other nodes.
Use the ssh(1) command format from the following step, but specify the host
FQDN of another node:
■ Step 2

6 Repeat the following step for each of the other nodes in the cluster:
■ Step 5

7 Log into another node.


8 Repeat the following steps for each of the other nodes in the cluster:
■ Step 2 through Step 6

9 Proceed to the following:


See “Installing the Veritas Cluster Server (VCS) software” on page 351.

Installing the Veritas Cluster Server (VCS) software


The following procedures explain how to install the VCS software.

Installing VCS 4.1 MP3


Perform this procedure on the node you want to use as the storage pool authority.
The installer automatically pushes the software to all nodes and installs the
software on all nodes.
The following notes pertain to this procedure and are explained at the appropriate
steps in the procedure:
■ This procedure uses the VCS menu-driven installer. After you make a selection
on the menu, press Enter. Many of the prompts propose a default. If you want
to accept the default, press Enter.
■ PDOS does not use all of the services on the VCS disk, so do not install all
services. For example, do not respond with Y, y, or Enter when the script
prompts you to install the following VCS services:
■ VRTSvcsApache
■ VRTScssim

PureDisk does not use these VCS services.


■ Do not respond with Y, y, or Enter when the script prompts you to configure
VCS. A later procedure requires you to configure VCS with a configuration
script that is tuned to PureDisk.

Caution: Read the instructions that accompany each step in the following
procedure. You cannot install VCS in a PureDisk environment by pressing Y, y, or
Enter in response to each prompt. The result is an installation that is not
compatible with PureDisk. The procedure explains when to decline installation
of unnecessary or incompatible components.

To install VCS 4.1 MP3


1 Insert the PureDisk software disk into a drive that is capable of reading a DVD.
You can insert the disk into a drive that is attached to the storage pool
authority or into a drive that is attached to the failed node.
2 (Conditional) Type commands to mount the disk.
For example:

# mkdir /cdrom
# mount /dev/cdrom /cdrom

On a few hardware platforms, the cdrom device /dev/cdrom is not available by default. If your system cannot find /dev/cdrom, see the troubleshooting information in the PureDisk Storage Pool Installation Guide.
3 Type the following command to change to the mount directory:

# cd /cdrom/vcsmp3

4 Type the following command to start the installer:

# ./installer

5 Respond to the script’s preliminary prompts as follows:

■ Prompt: Enter a Selection : [I,C,L,P,U,D,Q,?]
  Recommended action: Type I and press Enter.
  Notes: Choice I indicates that you want to install a product.

■ Prompt: Select a product to install: [1-5,b,q]
  Recommended action: Type 1 and press Enter.
  Notes: Choice 1 indicates that you want to install the Veritas Cluster Server.

■ Prompt: Enter the system names separated by spaces on which to install VCS:
  Recommended action: Type the unique host FQDNs for each node in the cluster. Use a space to separate each FQDN.
  Notes: Specify the FQDNs of the nodes you need to recover. Do not specify the FQDNs of the nodes you do not need to recover. Specify these as a space-separated list of FQDNs. Type the node FQDNs as you specified them in the /etc/hosts file. For example: node1.acme.com node2.acme.com node3.acme.com

■ Prompt: Initial system check completed successfully.
  Recommended action: Press Enter.
  Notes: The installer generates messages regarding a preinstallation system check. Press Enter after the final message, which indicates success.

■ Prompt: VERITAS Infrastructure rpms installed successfully.
  Recommended action: Press Enter.
  Notes: The installer generates messages regarding RPMs. Press Enter after the final message, which indicates success.

6 Respond to the script’s licensing prompts as follows:

■ Prompt: Do you want to enter another license key for node? [y,n,q,?] (n)
  Recommended action: Press Enter. (Make sure to press Enter in response to the prompt for each node.)
  Notes: The PDOS installer installed the license keys that you need for VCS. The VCS installer checks the license keys on all nodes in the cluster in sequence and prompts you to install more keys. You do not need to install more keys. Take one of the following actions to decline the opportunity to install more keys: press Enter, type n, or type no.

■ Prompt: VCS licensing verified successfully. Press [Enter] to continue:
  Recommended action: Press Enter.

■ Prompt: VCS licensing completed successfully.
  Recommended action: Press Enter.

7 Respond to the script’s package prompts regarding optional packages as follows:

■ Prompt: Select the optional rpms to be installed on all systems? [1-3,q,?] (1)
  Recommended action: Type 3 and press Enter.
  Notes: View the RPM descriptions and select optional RPMs.

■ Prompt: Do you want to install the VRTSvcsmn rpm on all systems? [y,n,q] (y)
  Recommended action: Press Enter.
  Notes: Press Enter to install the VRTSvcsmn package.

■ Prompt: Do you want to install the VRTSvcsApache rpm on all systems? [y,n,q] (y)
  Recommended action: Type n and press Enter.
  Notes: Do not install the VRTSvcsApache package.

■ Prompt: Do you want to install the VRTSvcsdc rpm on all systems? [y,n,q] (y)
  Recommended action: Press Enter.
  Notes: Press Enter to install the VRTSvcsdc documentation package.

■ Prompt: Do you want to install the VRTScscm rpm on all systems? [y,n,q] (y)
  Recommended action: Press Enter.
  Notes: Press Enter to install the VRTScscm Cluster Manager Java console package. Symantec does not support the Cluster Manager Web console on PDOS platforms.

■ Prompt: Do you want to install the VRTScssim rpm on all systems? [y,n,q] (y)
  Recommended action: Type n and press Enter.
  Notes: Do not install the Veritas Cluster Server Simulator package.

■ Prompt: Press [Enter] to continue:
  Recommended action: Review the information that appears and press Enter.
  Notes: The installer displays the list of packages to install after the following heading: Installer will install the following VCS rpms: Review the list of packages. The following packages should NOT be on the list: VRTSvcsApache and VRTScssim.

■ Prompt: Press [Enter] to continue:
  Recommended action: Review the information that appears and press Enter.
  Notes: The installer displays the output from installation checks after the following heading: Checking system installation requirements:

■ Prompt: Are you ready to configure VCS? [y,n,q] (y)
  Recommended action: Type n and press Enter.
  Notes: Do not configure VCS at this time.

■ Prompt: Would you like to install Cluster Server on all systems simultaneously? [y,n,q,?] (y)
  Recommended action: Press Enter.
  Notes: Press Enter to install Cluster Server on all nodes simultaneously.

■ Prompt: Cluster Server installation completed successfully. Press [Enter] to continue:
  Recommended action: Review the displayed information and press Enter.
  Notes: The installer displays progress messages.

■ Prompt: Press [Enter] to continue:
  Recommended action: Press Enter several times in response to the PERSISTENT_NAME messages.
  Notes: You do not need to take any action to correct your configuration because persistent naming is guaranteed through /etc/udev/rules.d/30-net_persistent_names.rules. You can safely ignore any warning messages that pertain to PERSISTENT_NAME.

■ Prompt: The README.1st file has more information about VCS. Read it Now? [y,n,q] (y)
  Recommended action: Press Enter, or type n and press Enter.
  Notes: Decide whether to read the README file and respond accordingly.

8 Enter commands to unmount and eject the PureDisk software disk.


For example:

# cd /
# umount /cdrom
# eject

9 Remove the disk.


10 Proceed to the following:
See “Installing VCS 4.1 MP4 and VCS 4.1 MP4RP3” on page 356.

Caution: Do not reboot now. Install VCS 4.1 MP4 before you reboot.

Installing VCS 4.1 MP4 and VCS 4.1 MP4RP3


The following procedure explains how to install VCS 4.1 MP4 and VCS 4.1 MP4RP3.

To install VCS 4.1 MP4 and VCS 4.1 MP4RP3


1 Put the PureDisk software disk into a drive that is capable of reading a DVD.
You can insert the disk into a drive that is attached to the storage pool
authority or into a drive that is attached to the failed node.
2 Type the following commands to mount the disk:

# mount /dev/cdrom /cdrom


# cd /cdrom/vcsmp4

On a few hardware platforms, the cdrom device /dev/cdrom is not available by default. If your system cannot find /dev/cdrom, see the troubleshooting information in the PureDisk Storage Pool Installation Guide.
3 Type the following command to start the installer:

# ./installmp

4 Respond to the script’s prompts regarding the cluster as follows:

■ Prompt: Enter the system names separated by spaces on which to install VERITAS Maintenance Pack 4:
  Recommended action: Type the unique host FQDNs for each node in the cluster. Use a space to separate each FQDN.
  Notes: Specify the FQDNs of the nodes you need to recover. Do not specify the FQDNs of the nodes you do not need to recover. Specify a space-separated list of FQDNs. Symantec recommends that you specify FQDNs. You also can specify IP addresses. Type the node FQDNs as you specified them in the /etc/hosts file. For example: node1.acme.com node2.acme.com node3.acme.com

■ Prompt: Press [Enter] to continue:
  Recommended action: Press Enter.
  Notes: The installer displays the output from communication checks after the following heading: Checking system communication:

5 Respond to the script’s prompts regarding node-specific installation characteristics as follows:

■ Prompt: Warning messages about the following packages, which appear when the installer checks the upgrade requirements: VRTSvxvmcommon, VRTSvxvmplatform, VRTSvxfscommon, and VRTSvxfsplatform.
  Recommended action: Type y and press Enter to continue with the installation in response to the message about each of these packages.
  Notes: Some of these messages indicate that a package of a more recent version (4.1.40.00) is installed. These messages appear because of a PDOS patch. Type y and press Enter to continue in response to each prompt.

■ Prompt: Do you want to stop these processes and install patches on node? [y,n,q] (y)
  Recommended action: Press Enter.
  Notes: The installer prompts you to stop the processes on each node. Confirm the stop for each node.

Note: The script repeats these questions for each node in the cluster. Respond
to the prompts for each node.

6 Respond to the script’s prompts regarding completion as follows:

■ Prompt: Press [Enter] to continue:
  Recommended action: Press Enter.
  Notes: The installer displays the output from the upgrade requirement checks after the following heading: Upgrade requirement checks completed successfully.

■ Prompt: Would you like to upgrade VERITAS Maintenance Pack 4 rpms on all systems simultaneously? [y,n,q,?] (y)
  Recommended action: Press Enter.
  Notes: You want to install the cluster server upgrades on all nodes simultaneously.

■ Prompt: VERITAS Maintenance Pack 4 installation completed successfully. Press [Enter] to continue:
  Recommended action: Press Enter.
  Notes: This step completes the installation.

7 Log into one of the failed nodes as root.


8 Type the following command to install VCS 4.1 MP4RP3:

# rpm -Uvh /opt/pdinstall/lib/vcs/MP4RP3/*.rpm

9 Repeat the following steps to install VCS 4.1 MP4RP3 on all failed nodes:
■ Step 7
■ Step 8
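
Because passwordless SSH between the nodes is already configured, you can also perform steps 7 through 9 from a single session with a loop like the following sketch. The node names are hypothetical; list only the failed nodes, and the sketch assumes the MP4RP3 RPMs are present at the same path on each of them:

for node in node2.acme.com node3.acme.com; do
    ssh $node "rpm -Uvh /opt/pdinstall/lib/vcs/MP4RP3/*.rpm"
done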

10 Unmount and eject the CD.


For example:

# cd /
# umount /cdrom
# eject

11 Proceed to the following:


See “Configuring VCS” on page 360.

Configuring VCS
Use the following procedure to configure VCS. The first steps require you to gather
information about the cluster. You need to specify unique information for this
cluster.
To configure VCS
1 Refer to this storage pool's cluster planning spreadsheet to confirm both the
unique name for this cluster and the cluster ID number.

Note: The cluster ID number must be unique on your public network. Conflicts
with existing cluster IDs can generate unpredictable results. The cluster ID
number you specify in this procedure must be the same as the cluster ID that
the storage pool already uses.

2 Log into the node that you want to configure as the storage pool authority
node.
3 Change to the directory where the installation software resides.
For example:

# cd /opt/VRTS/install

4 Enter the following command to start the installation program:

# ./installvcs -configure

5 Respond to the script's prompts.


In your responses, specify all nodes. Include the nodes you need to recover,
the nodes you do not need to recover, and the passive nodes.
Respond to the script’s preliminary prompts as follows:

■ Prompt: Enter the system names separated by spaces on which to configure VCS:
  Recommended action: Type the unique host FQDNs for each node in the cluster. Use a space to separate each FQDN.
  Notes: Specify a space-separated list of FQDNs. Type the node FQDNs as you specified them in the /etc/hosts file. For example: node1.acme.com node2.acme.com node3.acme.com

■ Prompt: Press [Enter] to continue:
  Recommended action: Press Enter.
  Notes: The installer displays the output from communication checks after the following heading: Checking system communication:

■ Prompt: VCS licensing verified successfully. Press [Enter] to continue:
  Recommended action: Press Enter.
  Notes: Do not enter additional license keys.

■ Prompt: Do you want to stop VCS processes? [y,n,q] (y)
  Recommended action: Press Enter.
  Notes: Stop the VCS processes.

■ Prompt: VCS processes are stopped. Press [Enter] to continue:
  Recommended action: Press Enter.
  Notes: The script stops all VCS processes.

■ Prompt: No configuration changes are made to the systems until all configuration questions are completed and confirmed. Press [Enter] to continue:
  Recommended action: Press Enter.
  Notes: The script displays information about how to respond to its prompts.



6 Respond to the script’s cluster name and cluster ID configuration prompts as follows:

■ Prompt: Enter the unique cluster name:
  Recommended action: Type a unique name for this cluster.
  Notes: This name must be unique on your network. You cannot include spaces or numbers in this name.

■ Prompt: Enter the unique Cluster ID number between 0-65535: [b,?]
  Recommended action: Type a unique ID number for this cluster.
  Notes: This number must be unique on your network.

Caution: Before you proceed to the next step, make sure to record the unique
cluster name and the unique cluster ID for this cluster.

7 Specify heartbeat information and respond to the prompts about your


configuration that the installer detects.

Note: The installer’s prompts differ depending on your configuration. The


following is an example session.

For example:

Discovering NICs on node1 .........................................................


discovered eth0 inet6 eth1 inet6 eth2 sit0

Enter the NIC for the first private heartbeat link on node1: [b,?] eth0
eth0 is probably active as a public NIC on node1
Are you sure you want to use eth0 for the first private heartbeat link? [y,n,q,b,?] (n)
y
Would you like to configure a second private heartbeat link? [y,n,q,b,?] (y)
Enter the NIC for the second private heartbeat link on node1: [b,?] eth2
eth2 is probably active as a public NIC on node1
Are you sure you want to use eth2 for the second private heartbeat link? [y,n,q,b,?]
(n) y
Would you like to configure a third private heartbeat link? [y,n,q,b,?] (n)

8 Press Enter to confirm that you do not want to configure an additional low-priority heartbeat link.
For example:

Do you want to configure an additional low priority heartbeat link? [y,n,q,b,?] (n)

9 In response to the prompt regarding the NICs on the other nodes, type y or
n and press Enter.

When you type y, you affirm that each node contains NICs in the same order.
When you type n, you request that the system reissue the same prompts for
each node in the cluster. This prompt sequence differs for each installation
depending on your configuration.

Are you using the same NICs for private heartbeat links on all systems? [y,n,q,b,?] (y)
y

10 Press Enter to confirm the cluster information.


For example:

Cluster information verification:

Cluster Name: pdvcs1


Cluster ID Number: 1
Private Heartbeat NICs for node1: link1=eth0 link2=eth2
Private Heartbeat NICs for node2: link1=eth0 link2=eth2
Private Heartbeat NICs for node3: link1=eth0 link2=eth2

Is this information correct? [y,n,q] (y)



11 Respond to the script’s closing questions as follows:

■ Prompt: Do you want to configure Cluster Manager (Web Console) [y,n,q] (y)
  Recommended action: Type n and press Enter.
  Notes: Do not configure Cluster Manager. Symantec does not support the Cluster Manager Web Console on PDOS platforms. Symantec supports the Cluster Manager Java Console on PDOS platforms, which you configure in a later procedure.

■ Prompt: Do you want to configure SMTP notification? [y,n,q] (y)
  Recommended action: Type n and press Enter.
  Notes: Do not configure SMTP notification.

■ Prompt: Do you want to configure SNMP notification? [y,n,q] (y)
  Recommended action: Type n and press Enter.
  Notes: Do not configure SNMP notification.

■ Prompt: Press [Enter] to continue:
  Recommended action: Press Enter several times in response to the PERSISTENT_NAME messages.
  Notes: Press Enter to disregard all PERSISTENT_NAME messages for each node and NIC. You do not need to take any action to correct your configuration because persistent naming is guaranteed through /etc/udev/rules.d/30-net_persistent_names.rules. You can safely ignore any warning messages that pertain to PERSISTENT_NAME.

■ Prompt: Cluster Server configured successfully. Press [Enter] to continue:
  Recommended action: Press Enter.

■ Prompt: Cluster Server was started successfully. Press [Enter] to continue:
  Recommended action: Press Enter.

■ Prompt: The README.1st file has more information about VCS. Read it Now? [y,n,q] (y)
  Recommended action: Press Enter, or type n and press Enter.
  Notes: Decide whether to read the README file and respond accordingly.

12 Type the following command to check the status of the open links:
# lltstat -n

The output returns a table that shows the link statuses. The output includes
a line for each node and a link for each private heartbeat on each node. The
LINKS field shows the number of private heartbeats per node. Each one should
be in the OPEN state.
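
For a three-node cluster, the output resembles the following sketch (the node names are hypothetical, and the exact column layout varies by VCS version):

LLT node information:
    Node                 State    Links
   * 0 node1             OPEN        2
     1 node2             OPEN        2
     2 node3             OPEN        2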
13 Repeat the following step on each node in the clustered storage pool:
■ Step 12

14 Proceed to the following:


See “(Conditional) Using YaST to create the storage partitions” on page 365.

(Conditional) Using YaST to create the storage partitions
Perform this procedure if the disks attached to the node you want to restore also
crashed. In this case, you need to recreate your storage partitions. If the disks
attached to the node you want to restore did not crash, you do not need to recreate
your storage partitions.
The following procedures explain how to use the YaST interface to create the
/Storage partition.

You need to create storage partitions on only the active nodes. Recreate the storage
partitions just as they were before the disaster. In other words, create a /Storage
partition, and, if necessary, create a /Storage/data partition and a
/Storage/databases partition, too. You do not need to create storage partitions
on a passive node.

Starting YaST
The following procedure explains how to start YaST.

To invoke YaST
1 Log into one of the active nodes.
The active nodes are those upon which you want to install PureDisk services.
2 Type the following command to launch the SUSE Linux YaST configuration
tool:

# yast

You can type yast or YaST to invoke the interface. Do not type other
combinations of uppercase and lowercase letters.
3 In the YaST Control Center main screen, select System > Partitioner.
4 Select Yes on the warning pop-up.
5 On the Expert Partitioner screen, select VxVM.
6 (Conditional) Select Add Group.
You have the option to select Add Group only when at least one group is
configured currently.
7 On the Create a Disk Group pop-up, type a unique name for the disk group
on this node.
8 Select OK.
9 On the Veritas Volume Manager: Disks Setup screen, highlight a disk that
you want to include in the disk group.
10 Highlight Add Disk and press Return.
You can only add disks that are not yet partitioned. If you try to add a disk that contains partitions, adding the disk to the disk group does not succeed. Delete all partitions from the disk before you try to add the disk to the disk group.
To delete all partitions on a disk, select Expert in the YaST interface and select Delete Partition Table and Disk Label.
11 On the Add a name for the disk pop-up, type a name for the disk.
12 Select OK.
13 Repeat the following steps for all the disks that you want to include in the disk group:
■ Step 9 through Step 12

14 Select Next.
15 Proceed to the following:
See “Creating the storage partitions” on page 367.

Creating the storage partitions


The following procedure explains how to create /Storage.
To create /Storage
1 On the Veritas Volume Manager: Volumes screen, select Add.
The Create Volume pop-up appears.
2 On the Create Volume pop-up, in the Volume Name field, specify Storage.

Note: Do not specify a size or a mount point.

3 Select OK.
4 Select Next.
5 (Conditional) Create a /Storage/data partition.
Perform the following steps to create a /Storage/data partition if this node
had a /Storage/data partition before the disaster:
■ On the Veritas Volume Manager: Volumes screen, select Add.
The Create Volume pop-up appears.
■ On the Create Volume pop-up, in the Volume Name field, specify
Storage_data.

Note: Do not specify a size or a mount point.

■ Select OK.
■ Select Next.

6 (Conditional) Create a /Storage/databases partition.


Perform the following steps to create a /Storage/databases partition if this
node had a /Storage/databases partition before the disaster:
■ On the Veritas Volume Manager: Volumes screen, select Add.
The Create Volume pop-up appears.

■ On the Create Volume pop-up, in the Volume Name field, specify Storage_databases.

Note: Do not specify a size or a mount point.

■ Select OK.
■ Select Next.

7 Select Apply.
8 On the Changes pop-up that appears, select Apply.
9 Select Quit.
10 Select Quit (again).
11 (Optional) Reboot the node.
Perform this step if you want to test the disk group configuration for reboot persistence.
12 Type the following command to view the disk summary for this node:
# vxdisk -o alldgs list
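
The output resembles the following sketch (the device names, disk names, and disk group name are hypothetical):

DEVICE       TYPE            DISK         GROUP        STATUS
sda          auto:none       -            -            online invalid
sdb          auto:cdsdisk    disk01       storagedg    online
sdc          auto:cdsdisk    disk02       storagedg    online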
Appendix B
Command Line Interface options for PureDisk

General MAN page for PureDisk CLI


General MAN page for PureDisk CLI – A text-based interface for managing
Symantec NetBackup PureDisk.

DESCRIPTION
NetBackup PureDisk offers customers a software-based data deduplication solution that integrates with NetBackup. It provides the critical features that are required to protect all of their data, from remote offices to virtual environments to the datacenter. It reduces the size of backups with a deduplication engine that can be deployed for storage reduction, and its integration with NetBackup reduces bandwidth through the use of PureDisk clients. An open architecture allows customers to easily deploy and scale NetBackup PureDisk using standard storage and servers.

NOTES
■ The command line interface commands are found only on the storage pool
authority in the /opt/pdcli/calls directory.
■ All man pages that are associated with the commands are located in the
/opt/pdcli/man directory.

■ Precede special symbols in arguments with an escape character. In the bash shell, use single quotation marks (') to accomplish that, as in the following example:
Example: '"any argument can fit in here, even arguments with a ¦ symbol"'

■ The command line interface commands can be used to script activities. Make sure that the first command in the script is the pdlogonuser command. If you do not run pdlogonuser, you are prompted for a user name and password before each command is executed. A sketch of this ordering appears after these notes.
■ The contents of all man pages are collected in a PDF format for offline viewing.
See the PureDisk Command Line Interface Guide.
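
The following sketch illustrates the scripting order that the note above describes. It shows only the sequence of calls; command arguments are omitted because they vary, so see each command's man page in /opt/pdcli/man for the actual syntax:

#!/bin/bash
# Sketch of a PureDisk CLI script (arguments omitted; see the man pages).
cd /opt/pdcli/calls

./pdlogonuser     # run first: caches credentials so later calls do not prompt
./pdlistagent     # subsequent CLI calls reuse the cached credentials
./pdlistpolicy
./pdexit          # remove the locally saved credentials when the script ends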

PUREDISK COMMANDS GROUPED BY FUNCTION


General commands
■ pdactivateagent - Activates the agent software on a client computer.

■ pdbackup - Creates a backup job for the client specified.

■ pdbackupstop - Used to stop any running job.



■ pddeactivateagent - Deactivates the agent software on a client computer so


it is no longer backed up by active PureDisk policies.
■ pdexit - Removes any locally saved credentials.

■ pdexport2nbu - Exports a data selection to a NetBackup files list for use with
a NetBackup policy.
■ pdfindfiles - Used to find the files that have been backed up.

■ pdlogonuser - Saves the credentials locally to avoid interaction during calls.

■ pdpasswd - Used to set or change a user password.

■ pdrestore - Starts a restore job from the specified parameters.

■ pdrunpolicy - Executes the specified policy.

■ pdstatlicensing - Collects and displays extra information about the license


keys.
■ pdupgrade - Used to initiate the upgrade of client software on the specified
client.
Create functions
■ pdcreatebackuppolicy - Creates a new backup policy.

■ pdcreatedataremovalpolicy - Creates a policy to remove data from a content


router.
■ pdcreatedepartment - Creates a new department that is used to organize client
systems.
■ pdcreateds - Creates a new selection of files and directories on a PureDisk
client for backup.
■ pdcreatedstemplate - Used to create data selection templates.

■ pdcreateeventescalation - Creates a new event escalation.

■ pdcreategroup - Creates a new group that is used to organize users with the
same permissions.
■ pdcreatelocation - Creates a new logical grouping for one or more
departments.
■ pdcreatembgarbagecollectionpolicy - Creates a new metabase garbage
collection policy.
■ pdcreatepolicyescalation - Creates a new policy escalation.

■ pdcreatepolicyescalationaction - Creates a new policy escalation action.



■ pdcreatereplicationpolicy - Creates a new replication policy.

■ pdcreatesmtpeventescalationaction - Creates an SMTP event escalation


action.
■ pdcreatesnmpeventescalationaction - Creates an SNMP event escalation
action.
■ pdcreateuser - Creates a new user within PureDisk that can be assigned rights
and permissions.
Delete functions
■ pddeleteagent - Deletes an agent from the PureDisk database.

■ pddeletedepartment - Deletes a department from the storage pool authority


(SPA).
■ pddeleteds - Deletes a data selection from a PureDisk policy.

■ pddeletedstemplate - Deletes a data selection template.

■ pddeleteeventescalation - Unbinds an event escalation action from the


agent or the storage pool.
■ pddeleteeventescalationaction - Deletes an SMTP action or SNMP action.

■ pddeletegroup - Deletes a user group from the storage pool authority (SPA).

■ pddeletejob - Deletes a job. If the job is running, the command raises an error and does not delete the job. If the job is not running, the command deletes the job.
■ pddeletelicense - Deletes a license key.

■ pddeletelocation - Deletes a location from the storage pool authority (SPA).

■ pddeletepolicy - Deletes a policy.

■ pddeletepolicyescalation - Deletes a policy escalation.

■ pddeletepolicyescalationaction - Deletes a policy escalation action.

■ pddeleteuser - Deletes a user from the storage pool authority (SPA).

Get functions
■ pdgetagent - Provides additional information about the agent object specified.

■ pdgetdepartment - Provides additional information about the department


object specified.
■ pdgetds - Provides additional information about the data selection object
specified.

■ pdgetdstemplate - Provides information about the data selection template


specified.
■ pdgeteventescalation - Provides information about the event escalation
specified.
■ pdgeteventescalationaction - Provides information about the event
escalation action specified.
■ pdgetgroup - Provides additional information about the group object specified.

■ pdgetjob - Provides additional information about the job object specified.

■ pdgetjobstat - Retrieves the job statistics from the PureDisk database.

■ pdgetjobsteps - Used to list the steps that are associated with the specified
job.
■ pdgetlicense - Collects information about the specified license key.

■ pdgetlocation - Provides additional information about the location object


specified.
■ pdgetpolicy - Provides additional information about the policy object specified.

■ pdgetpolicyescalation - Provides information about the policy escalation


object.
■ pdgetpolicyescalationaction - Provides information about the policy
escalation action.
■ pdgetstoragepool - Provides information about the storage pool.

■ pdgetuser - Provides information about the user object specified.

List functions
■ pdlistagent - Displays all agents that are associated with a particular PureDisk
environment.
■ pdlistdepartment - Displays all departments that are associated with a
particular PureDisk environment.
■ pdlistds - Displays all data selections that are associated with a particular
PureDisk environment.
■ pdlistdstemplate - Displays all the data selection templates.

■ pdlistevent - Displays all events that are associated with a particular PureDisk
environment.
■ pdlisteventescalation - Displays all the event escalations.

■ pdlisteventescalationaction - Displays a list of all the event escalation


actions.
■ pdlistgroup - Displays all the user groups that are associated with a particular
PureDisk environment.
■ pdlistjob - Displays all jobs that are associated with a particular PureDisk
environment.
■ pdlistlicense - Displays all the installed license keys.

■ pdlistlocation - Displays all the locations that are associated with a particular
PureDisk environment.
■ pdlistpolicy - Displays all the policies that are associated with a particular
PureDisk environment.
■ pdlistpolicyescalation - Displays the policy escalations that are attached
to a policy.
■ pdlistpolicyescalationaction - Displays all the actions that are attached
to a policy.
■ pdlistuser - Displays all the users that are associated with a particular
PureDisk environment.
Set functions
■ pdsetagent - Changes and updates the details that are associated with an
existing agent.
■ pdsetbackuppolicy - Change the parameters of an existing backup policy.

■ pdsetcrgarbagecollectionpolicy - Change the parameters of an existing


content router garbage collection policy.
■ pdsetdatalock - Resets the data lock password.

■ pdsetdataminingpolicy - Change the parameters of an existing data mining


policy.
■ pdsetdataremovalpolicy - Change the parameters of an existing data removal
policy.
■ pdsetdebugagent - Change the debugging parameters for the agent.

■ pdsetdepartment - Changes and updates the details that are associated with
an existing department.
■ pdsetds - Changes and updates the details that are associated with an existing
data selection.

■ pdsetdsremovalpolicy - Change the parameters of an existing data selection


removal policy.
■ pdsetdstemplate - Changes and updates a data selection template.

■ pdseteventescalationaction - Change the parameters of an existing


escalation action.
■ pdsetgroup - Changes and updates the details that are associated with an
existing user group.
■ pdsetlicense - Adds a license key.

■ pdsetlocation - Changes and updates the details that are associated with an
existing location.
■ pdsetmaintenancepolicy - Change the parameters of an existing maintenance
policy.
■ pdsetmbgarbagecollectionpolicy - Change the parameters of an existing
metabase garbage collection policy.
■ pdsetperm - Sets the permissions for a user.

■ pdsetpolicyescalationaction - Change the parameters of an existing policy


escalation action.
■ pdsetreplicationpolicy - Change the parameters of an existing replication
policy.
■ pdsetserverdbmaintenancepolicy - Change the parameters of an existing
server database maintenance policy.
■ pdsetstoragepool - Changes and updates the specified parameters for a
storage pool.
■ pdsetuser - Changes and updates the details that are associated with an
existing user.
Appendix C
Third-party legal notices
This appendix includes the following topics:

■ Third-party legal notices for Symantec NetBackup PureDisk

Third-party legal notices for Symantec NetBackup PureDisk
Active Directory, Excel, Internet Explorer, Microsoft, Windows, Windows NT, and
Windows Server are either registered trademarks or trademarks of Microsoft
Corporation in the United States and other countries.
AIX, IBM, PowerPC, and Tivoli are trademarks or registered trademarks of
International Business Machines Corporation in the United States, other countries,
or both.
All SPARC trademarks are used under license and are trademarks or registered
trademarks of SPARC International, Inc., in the United States and other countries.
Products bearing SPARC trademarks are based upon an architecture developed
by Sun Microsystems, Inc.
AMD is a trademark of Advanced Micro Devices, Inc.
Firefox and Mozilla are registered trademarks of the Mozilla Foundation.
Intel, Itanium, Pentium, and Xeon are trademarks or registered trademarks of
Intel Corporation or its subsidiaries in the United States and other countries.
Java, Sun, and Solaris are trademarks or registered trademarks of Sun
Microsystems, Inc., in the United States and other countries.
Linux is a registered trademark of Linus Torvalds in the United States and other
countries.
Mac OS is a trademark of Apple Inc., registered in the U.S. and other countries.

NetApp is a registered trademark of Network Appliance, Inc. in the U.S. and other
countries.
Novell and SUSE are registered trademarks of Novell, Inc., in the United States
and other countries.
OpenLDAP is a registered trademark of the OpenLDAP Foundation.
Red Hat and Enterprise Linux are registered trademarks of Red Hat, Inc., in the
United States and other countries.
UNIX is a registered trademark of The Open Group.
VMware, vSphere, and the VMware "boxes" logo and design are trademarks or registered trademarks of VMware, Inc., in the United States and other countries.
Third-party software may be recommended, distributed, embedded, or bundled
with this Symantec product. Such third-party software is licensed separately by
its copyright holder. All third-party copyrights associated with this product are
listed in the Third Party Legal Notices document, which is accessible from the
PureDisk Web UI.
Index

A C
Accessing a managed storage pool 303 central storage pool
ACL errors restore job statistic 224 disabling 302
ACL restore job statistic 224 enabling 301
activating a new PureDisk component 287 managing 303
Activating a server agent 92 testing connections 304
adding PureDisk components 280 clustering
authentication key 347, 349 administering a storage pool 310
Average restore rate restore job statistic 224 also see disaster recovery - clustered storage
pools 157
B configuration example 341
examining NICs for addressing 343
Backup job statistics 219
installing VCS 4.1 MP3 software 351
Backup speed backup job statistic 219
installing VCS 4.1 MP4 software 356
Backup time duration backup job statistic 219
removing addressing from NICs 346
Backup time duration PDDO backup job statistic 228
synchronizing passwords 347
Bytes deleted in source data selection replication job
VCS software requirements 341
statistic 227
configuration files
Bytes deleted on source backup job statistic 219
editing agent configuration files for large
Bytes modified in source data selection replication
backups 333
job statistic 227
editing ASCII files 326
Bytes modified on source backup job statistic 219
editing through Web UI 322
Bytes modified on target restore job statistic 224
pushing changes 325
Bytes new in source data selection replication job
updating agent configuration files 327
statistic 227
configuring
Bytes new on source backup job statistic 219
see reconfiguring PureDisk 321
Bytes new on target restore job statistic 224
configuring SPAR 201–202
Bytes not modified on source backup job statistic 219
copying a replication policy 68
Bytes received by agent restore job statistic 224
Bytes replicated replication job statistic 227
Bytes scanned during backup PDDO backup job D
statistic 228 dashboard reports 249
Bytes selected on source backup job statistic 219 data lock password
Bytes total restore job statistic 224 export to NetBackup 84
Bytes transferred backup job statistic 219 files tab 215
Bytes transferred replication job statistic 227 data mining reports 233
Bytes transferred to content router PDDO backup job data replication
statistic 228 copying replicated data 70
Bytes unmodified on target restore job statistic 224 managing replicated data selections 68
Bytes with errors restore job statistic 224 policies
Bytes with replication errors replication job copying and deleting 68
statistic 227 creating 61

data replication (continued) external directory service authentication (continued)


policies (continued) verifying TLS 29
full replication 66
incremental replication 66 F
scheduling 66
file-type segmentation 336
viewing replication jobs 68
Files deleted on source backup job statistic 219
restoring replicated data 70
Files modified on source backup job statistic 219
understanding 59
Files modified on target restore job statistic 224
using replicated data 68
Files new on source backup job statistic 219
viewing replicated data selections 68
Files new on target restore job statistic 224
data selection
Files not modified on source backup job statistic 219
removal policies
Files selected on source backup job statistic 219
scheduling 42
Files unmodified on target restore job statistic 224
date replication policy
Files with errors restore job statistic 224
creating 62
editing 62
deleting a replication policy 68 G
Devices restore job statistic 224 Global data reduction factor backup job statistic 219
Directory count restore job statistic 224 Global data reduction saving PDDO backup job
disaster recovery statistic 228
backup procedures Global data reduction savings backup job statistic 219
creating backup or restore scripts 115
policies 106 H
troubleshooting 117 Hard links restore job statistic 224
using NetBackup 100–101
using scripts 100, 114
clustered storage pools I
DR_Restore_all script 175 international characters 272
restoring PureDisk 157 Items deleted in source data selection replication job
strategies 201 statistic 227
unclustered storage pools Items modified in source data selection replication
DR_Restore_all script 122 job statistic 227
restore examples 122 Items new in source data selection replication job
restoring PureDisk 121 statistic 227
DR_Restore_all script 122, 175 Items replicated replication job statistic 227
Items with replication errors replication job
statistic 227
E
Error count restore job statistic 224
export engine L
see NetBackup - export engine 73 license keys
external directory service authentication adding and removing 299
adding PureDisk groups 19 report generation 304
changing TLS specification 46 required 74
disabling 45 viewing in reports 259
introduction 18 log files 261
linking with PureDisk 33
maintaining PureDisk groups 43 M
modifying the base search path 46 Managed storage pool
synchronizing with PureDisk 40, 42 Accessing 303

Media server cache hit percentage PDDO backup job processes


statistic 228 stopping and starting on a multinode PureDisk
metabase engine storage pool 318
rerouting 304 stopping and starting on a PureDisk node
multistreamed (parallel) backups 334 (clustered) 317
multistreamed (parallel) restores 335 stopping and starting on a PureDisk node
multistreamed replication 337 (unclustered) 314
PureDisk Web UI
N Starting an additional 303
NetBackup
disaster recovery 100–101 R
export engine 73 reconfiguring PureDisk
adding the service 280 editing agent configuration files for large
configuring 75 backups 333
exporting Files and Folders data editing configuration files 322
selections 85 overview 321
job failures 92 updating agent configuration files 327
point-in-time export 91 recovery from a disaster 121, 157
restoring 94 removing a content router 296
server agents and export jobs 92 replication
NICs see data replication 59
examining for existing addresses 343 see storage pool authority replication
removing addressing 346 (SPAR) 199
Replication job statistics 227
O Replication time duration replication job statistic 227
replication tuning 337
optimization 331, 337
reports
central storage pool 254, 301
P central storage pool test connection 304
password synchronization 347 dashboards 249, 254
PDDO backup job statistics 228 data mining 233
pdkeyutil utility 117 finished job 218
PDOS for a running job 215
changing the password 310 overview 214
performance optimization permissions 214
file-type segmentation 336 web service 242
multistreamed (parallel) backups 334 rerouting a metabase engine 304
multistreamed (parallel) restores 335 Restore job statistics 224
segmentation options for backup jobs 336 Restore time duration restore job statistic 224
segmentation threshold values 336 RestoreSPASIO command 209
policies restoring from NetBackup 94
data mining 233 retrieving information from a data mining policy 236
data replication 61
disaster recovery backup procedures 106
exports to NetBackup 85
S
segmentation options for backup jobs 336
NetBackup DataStore 91
segmentation threshold value 336
run once 66, 109
server agents
activating 92

single port communication 49 storage pool management (continued)


single port communications 66 rerouting a metabase engine 304
SIS testing connections 304
see data deduplication 238 storage pools
Source bytes backed up backup job statistic 219 stopping and starting processes 318
Source bytes with errors backup job statistic 219 Symbolic links restore job statistic 224
Source files backed up backup job statistic 219
Source files with errors backup job statistic 219 T
SSH public keys 348, 350
Time-out interval, adjusting 314
Start date/time backup job statistic 219
Total files restore job statistic 224
Start date/time PDDO backup job statistic 228
troubleshooting
Start date/time replication job statistic 227
NetBackup export engine jobs 92
Start date/time restore job statistic 224
tuning 331, 337
Starting another PureDisk Web UI 303
Statistics for a backup job 219
Statistics for a PDDO backup job 228 U
Statistics for a replication job 227 Unique bytes backed up backup job statistic 219
Statistics for a restore job 224 Unique files and folders backed up backup job
Stop date/time backup job statistic 219 statistic 219
Stop date/time PDDO backup job statistic 228 Unique items received restore job statistic 224
Stop date/time replication job statistic 227 Unique items restored restore job statistic 224
Stop date/time restore job statistic 224 user authentication
stopping and starting processes root broker 254
multinode PureDisk storage pool 318
PureDisk node (clustered) 317 V
PureDisk node (unclustered) 314 VCS
storage pool authority replication (SPAR) configuring 360
disaster recovery strategies 201 installing VCS 4.1 MP3 software 351
enabling backups 204 installing VCS 4.1 MP4 software 356
example 199 preparing to install 341
RestoreSPASIO command 209 Verification failures restore job statistic 224
restoring 207 Verification successes restore job statistic 224
running a SPAR policy manually 206
upgrading PureDisk with SPAR enabled 211
storage pool management W
activating a new PureDisk service 287 web service reports 242
adding and removing license keys 299 Web UI
adding PureDisk components 280 Starting an additional 303
adjusting the clock on a PureDisk node 312
administering a clustered storage pool 310 Y
changing database or LDAP admin YaST 365
passwords 311
changing the PDOS password 310
disabling central reporting 302
enabling central reporting 301
increasing the number of client connections 311
license key report generation 304
managing in the central storage pool 303
removing a content router 296
