PureDisk™ Administrator's
Guide
Release 6.6.0.2
Legal Notice
Copyright © 2009 Symantec Corporation. All rights reserved.
Symantec, the Symantec Logo, and PureDisk are trademarks or registered trademarks of
Symantec Corporation or its affiliates in the U.S. and other countries. Other names may be
trademarks of their respective owners.
This Symantec product may contain third party software for which Symantec is required
to provide attribution to the third party (“Third Party Programs”). Some of the Third Party
Programs are available under open source or free software licenses. The License Agreement
accompanying the Software does not alter any rights or obligations you may have under
those open source or free software licenses. Please see the Third Party Legal Notice Appendix
to this Documentation or TPIP ReadMe File accompanying this Symantec product for more
information on the Third Party Programs.
The product described in this document is distributed under licenses restricting its use,
copying, distribution, and decompilation/reverse engineering. No part of this document
may be reproduced in any form by any means without prior written authorization of
Symantec Corporation and its licensors, if any.
THE DOCUMENTATION IS PROVIDED "AS IS" AND ALL EXPRESS OR IMPLIED CONDITIONS,
REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT,
ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO
BE LEGALLY INVALID. SYMANTEC CORPORATION SHALL NOT BE LIABLE FOR INCIDENTAL
OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH THE FURNISHING,
PERFORMANCE, OR USE OF THIS DOCUMENTATION. THE INFORMATION CONTAINED
IN THIS DOCUMENTATION IS SUBJECT TO CHANGE WITHOUT NOTICE.
The Licensed Software and Documentation are deemed to be commercial computer software
as defined in FAR 12.212 and subject to restricted rights as defined in FAR Section 52.227-19
"Commercial Computer Software - Restricted Rights" and DFARS 227.7202, "Rights in
Commercial Computer Software or Commercial Computer Software Documentation", as
applicable, and any successor regulations. Any use, modification, reproduction release,
performance, display or disclosure of the Licensed Software and Documentation by the U.S.
Government shall be solely in accordance with the terms of this Agreement.
Symantec Corporation
350 Ellis Street
Mountain View, CA 94043
http://www.symantec.com
Technical Support
Symantec Technical Support maintains support centers globally. Technical
Support’s primary role is to respond to specific queries about product features
and functionality. The Technical Support group also creates content for our online
Knowledge Base. The Technical Support group works collaboratively with the
other functional areas within Symantec to answer your questions in a timely
fashion. For example, the Technical Support group works with Product Engineering
and Symantec Security Response to provide alerting services and virus definition
updates.
Symantec’s maintenance offerings include the following:
■ A range of support options that give you the flexibility to select the right
amount of service for any size organization
■ Telephone and Web-based support that provides rapid response and
up-to-the-minute information
■ Upgrade assurance that delivers automatic software upgrade protection
■ Global support that is available 24 hours a day, 7 days a week
■ Advanced features, including Account Management Services
For information about Symantec’s Maintenance Programs, you can visit our Web
site at the following URL:
www.symantec.com/techsupp/
Customer service
Customer service information is available at the following URL:
www.symantec.com/techsupp/
Customer Service is available to assist with the following types of issues:
■ Questions regarding product licensing or serialization
■ Product registration updates, such as address or name changes
■ General product information (features, language availability, local dealers)
■ Latest information about product updates and upgrades
■ Information about upgrade assurance and maintenance contracts
■ Information about the Symantec Buying Programs
■ Advice about Symantec's technical support options
■ Nontechnical presales questions
■ Issues that are related to CD-ROMs or manuals
Maintenance agreement resources
If you want to contact Symantec regarding an existing maintenance agreement,
please contact the maintenance agreement administration team for your region
as follows:
Symantec Early Warning Solutions
These solutions provide early warning of cyber attacks, comprehensive threat
analysis, and countermeasures to prevent attacks before they occur.
Managed Security Services
These services remove the burden of managing and monitoring security devices
and events, ensuring rapid response to real threats.
Consulting Services
Symantec Consulting Services provide on-site technical expertise from
Symantec and its trusted partners. Symantec Consulting Services offer a variety
of prepackaged and customizable options that include assessment, design,
implementation, monitoring, and management capabilities. Each is focused on
establishing and maintaining the integrity and availability of your IT resources.
Educational Services
Educational Services provide a full array of technical training, security
education, security certification, and awareness communication programs.
To access more information about Enterprise services, please visit our Web site
at the following URL:
www.symantec.com
Select your country or language from the site index.
Assumptions
The procedures you need to perform to configure external authentication assume
that you are familiar with how your site’s OpenLDAP or Active Directory service
is organized. The procedures also assume that your site’s directory service
administrator can provide you with information about how the directory service
is configured.
External directory service authentication 19
Obtaining directory service information
User accounts
The following information pertains to user accounts when external authentication
is enabled:
■ The Edit LDAP Server Configuration screen in the Web UI includes a checkbox
labeled Enable LDAP Authentication. When this box is checked, PureDisk
authenticates through an external directory service. When this box is
unchecked, PureDisk authenticates through its internal OpenLDAP directory
service. You cannot merge these directory services.
PureDisk can use either its internal directory service or your external directory
service, but it cannot use both at the same time. When PureDisk is configured
to authenticate through its internal directory service, only its local user
accounts are valid. However, when PureDisk is configured to use an external
directory service, only the accounts from that external directory service are
valid.
■ If the external service is down, you can authenticate through PureDisk’s
internal OpenLDAP service. However, operations that contact the external
service fail while it is down. For example, if you try to synchronize the
external directory service, the job that runs the system policy for syncing
external LDAP users fails.
■ If you want to add PureDisk users and groups, add them in your directory
service and import them into PureDisk. When authentication through an
external directory service is enabled, you cannot create users and groups
directly in PureDisk.
After you import users and groups from the external directory service, you
need to grant PureDisk permissions to those users and groups.
■ You cannot import a user with the root login property from an external
directory service. The root users for both PureDisk and for the external
directory service are always present and are always unique. By default, the
PureDisk root user’s permissions and privileges are always the same. They
remain the same regardless of whether authentication is through PureDisk’s
internal directory service or through an external directory service.
dc=com
  dc=acme
    ou=users
      cn=Alice Munro
      cn=Bob Cratchit
      cn=Claire Clairmont
      cn=Dave Bowman
    ou=groups
      cn=chicago
      cn=atlanta
This directory service has two organizational units: users and groups.
You can use the ldapsearch(1) command to obtain a listing of this directory
service. The command to obtain a listing of users and groups is as follows:
# ldapsearch -H ldap://100.100.100.101:389 -x \
-D "cn=Alice Munro,ou=users,dc=acme,dc=com" -W \
-b dc=acme,dc=com "(objectClass=*)" > /tmp/example.txt
If more directory entries exist in the same directory subtree, a command such as
the preceding example returns information about more than users and groups.
The command writes its output to file example.txt. In the example file that
follows, characters in bold represent definitions from this file that you need later
in the configuration process:
# extended LDIF
#
# LDAPv3
# base <dc=acme,dc=com> with scope subtree # base search path
# filter: (objectClass=*)
# requesting: ALL
#
objectClass: user
objectClass: inetOrgPerson
cn: Bob Cratchit
sn: Cratchit
description: Bob's Description
givenName: Bob
distinguishedName: CN=Bob Cratchit,OU=users,DC=acme,DC=com
displayName: Bob Cratchit
memberOf: CN=chicago,OU=groups,DC=acme,DC=com
name: Bob Cratchit
sAMAccountName: bob.cratchit
userPrincipalName: bob.cratchit@acme.com
mail: bob.cratchit@acme.com
givenName: Dave
distinguishedName: CN=Dave Bowman,OU=users,DC=acme,DC=com
displayName: Dave Bowman
memberOf: CN=atlanta,OU=groups,DC=acme,DC=com
name: Dave Bowman
sAMAccountName: dave.bowman
userPrincipalName: dave.bowman@acme.com
mail: dave.bowman@acme.com
# search result
search: 7
result: 0 Success
# numResponses: 7
# numEntries: 6
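The attributes shown in bold in the listing are the ones you transcribe into the PureDisk configuration screens later. If you prefer to pull them out programmatically, a minimal parser such as the following works on listings like the one above. The parser and the abbreviated sample entries are illustrative sketches, not part of PureDisk:

```python
def parse_ldif(text):
    """Split an ldapsearch listing into entries (dicts of attribute lists)."""
    entries, attrs = [], {}
    for line in text.splitlines():
        line = line.rstrip()
        # Blank lines and comments separate entries in ldapsearch output.
        if not line or line.startswith("#"):
            if attrs:
                entries.append(attrs)
                attrs = {}
            continue
        key, sep, value = line.partition(": ")
        if sep:
            attrs.setdefault(key, []).append(value)
    if attrs:
        entries.append(attrs)
    return entries

# Abbreviated sample in the same shape as the listing above.
listing = """\
objectClass: user
cn: Bob Cratchit
distinguishedName: CN=Bob Cratchit,OU=users,DC=acme,DC=com
memberOf: CN=chicago,OU=groups,DC=acme,DC=com
sAMAccountName: bob.cratchit

objectClass: user
cn: Dave Bowman
distinguishedName: CN=Dave Bowman,OU=users,DC=acme,DC=com
memberOf: CN=atlanta,OU=groups,DC=acme,DC=com
sAMAccountName: dave.bowman
"""

# Collect the values needed during configuration: each user's DN,
# login attribute (sAMAccountName), and group membership.
for entry in parse_ldif(listing):
    print(entry["sAMAccountName"][0], "->", entry["memberOf"][0])
```

The same approach works for the OpenLDAP listing later in this chapter; only the attribute names differ.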
dc=com
  dc=marlins
    ou=commuters
      cn=Florence Leeds
      cn=Mary Evans
      cn=Diana Goyer
      cn=Adam Smith
      cn=Eric Meyer
      cn=Joe McKinley
    ou=groups
      cn=bikers
      cn=drivers
This directory service has two organizational units: commuters and groups.
You can use the ldapsearch(1) command to obtain a listing of this directory
service. The command to obtain a listing of the users and groups is as follows:
# ldapsearch -H ldap://100.100.100.100:389/ -x \
-D "cn=Diana Goyer,ou=commuters,dc=marlins,dc=com" -W \
-b "dc=marlins,dc=com" "(objectClass=*)" > /tmp/example.txt
This example writes its output to file example.txt. In the example that follows,
characters in bold represent the definitions that you need later in the configuration
process. The external directory service authentication configuration procedures
use examples from this listing. File example.txt is as follows:
# extended LDIF
#
# LDAPv3
# base <dc=marlins,dc=com> with scope subtree # base search path
# filter: (objectClass=*)
# requesting: ALL
#
# marlins.com
dn: dc=marlins,dc=com
dc: marlins
objectClass: domain
# commuters, marlins.com
dn: ou=commuters,dc=marlins,dc=com
ou: commuters
objectClass: organizationalUnit
# groups, marlins.com
dn: ou=groups,dc=marlins,dc=com
ou: groups
objectClass: organizationalUnit
userPassword:: cGFzc3dvcmQ=
# search result
search: 2
result: 0 Success
# numResponses: 12
# numEntries: 11
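Note the double colon in the userPassword line above: in LDIF, `::` marks a base64-encoded value rather than a literal string. You can decode such a value with a couple of lines of Python (shown only to illustrate the LDIF notation):

```python
import base64

# The "userPassword:: cGFzc3dvcmQ=" line stores its value base64-encoded,
# which is what the double colon in LDIF indicates.
decoded = base64.b64decode("cGFzc3dvcmQ=").decode("ascii")
print(decoded)  # -> password
```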
See “(Optional) Verify TLS and copy the CA certificate” on page 29.
■ Otherwise, proceed to the following topic:
See “Linking PureDisk to the external directory service” on page 33.
# ping mn.north.stars
PING mn.north.stars (100.100.100.101) 56(84) bytes of data.
64 bytes from mn.north.stars (100.100.100.101): icmp_seq=1 ttl=64 time=4.71 ms
64 bytes from mn.north.stars (100.100.100.101): icmp_seq=2 ttl=64 time=0.353 ms
ip_addr_of_external_directory_services common_name
100.100.100.101 mn.north.stars
# /usr/bin/openssl x509 -inform DER -outform PEM -in file.cer -out file.pem
file.cer The name of the file that contains the Active Directory
certificate. This file name ends in .cer. Obtain this file from
your site’s directory service administrator.
4 Use the openssl(1) command and the s_client(1) program to test the port
connections and to verify that the SSL certificate operates correctly.
Type the following command:
FQDN:port The directory service server’s FQDN and the port from which
you imported the certificate. This variable takes the format
FQDN:port. Specify the following values:
cert_loc Specify the absolute path to the certificate file. This file is the
one that you copied in step 2.
5 Use the ldapsearch(1) command to test the connection between the storage
pool authority and the directory service server.
The connection needs to be open to allow continued authentication activities.
Type the command as follows. For example:
# /usr/bin/ldapsearch -H ldaps://100.100.100.101:636 -x \
-D "cn=Alice Munro,ou=users,dc=acme,dc=com" -W \
-b ou=groups,dc=acme,dc=com "(objectClass=group)"
port The port that PureDisk uses for TLS communication. This
port is the one where the external OpenLDAP server runs
ldaps. By default, the value is 636.
uid The distinguished name of the test user with which to bind.
filter An object class name. The command searches for this object
class name as a test.
Configuring communication
The following procedure explains how to configure communication in the PureDisk
Web UI.
To configure communication in the PureDisk Web UI
1 Display the storage pool authority opening screen.
Open a browser window and type the following URL:
https://URL
For URL, specify the URL to the storage pool authority. For example, in an
all-in-one environment, this value is the URL of the PureDisk node upon
which you installed all the PureDisk software. For example,
https://acme.mnbe.com.
2 Type your user name and password at the prompts on the login screen.
3 Click Settings > Configuration.
4 In the left pane, click the plus (+) sign to the left of LDAP server.
5 Select External LDAP.
The LDAP Server Configuration properties appear in the right pane.
6 Complete the Connection tab.
See “Completing the Connection tab” on page 34.
7 Complete the Mapping tab.
See “Completing the Mapping tab” on page 37.
8 Enable user group management.
See “Managing user groups” on page 39.
3 Verify, and respecify if needed, the Port number that connects PureDisk to
the external OpenLDAP or Active Directory service.
The port to specify depends on whether TLS is enabled, as follows:
■ When TLS is enabled, the default security port is 636.
■ When TLS is not enabled, the default port is 389.
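The port choice above is mechanical, so it can be captured in a tiny helper. This is an illustration of the rule only; PureDisk itself has no such function:

```python
def default_ldap_port(tls_enabled):
    # 636 is the standard LDAPS (LDAP over TLS/SSL) port;
    # 389 is the standard plain-LDAP port.
    return 636 if tls_enabled else 389

print(default_ldap_port(True))   # -> 636
print(default_ldap_port(False))  # -> 389
```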
■ Type the following into the box under the Base Search path label:
dc=marlins,dc=com
■ Click Add.
5 Click Save.
6 (Conditional) On the storage pool authority, edit the /etc/hosts file and add
a line that allows the storage pool authority service to resolve the directory
server using the name of this certificate.
Perform this step if you enabled TLS.
For example, assume that the common name of the certificate file is
blinkie.acme.com. Note that this string is not the FQDN of the server upon
which the directory resides. Add the following entry:
100.100.100.101 blinkie.acme.com
■ Click Add.
■ Repeat the preceding steps to add distinguished names for all the user
groups that PureDisk needs to authenticate.
■ Click Save.
If the external directory service is down at the time you click Save,
PureDisk generates the following message:
During the synchronization, PureDisk does not synchronize user passwords. The
passwords reside only in the external directory service files.
The PureDisk Web UI does not accept blank passwords. Make sure that all
users you want to authenticate through an external directory service have a
nonblank password. A user with a blank password cannot log in to the PureDisk
Web UI.
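Because a user with a blank password cannot log in, it can be worth scanning a directory export for blank passwords before you synchronize. The check below is a hypothetical helper, not a PureDisk tool; it expects entries as attribute dictionaries such as those an LDIF parser might produce:

```python
def users_with_blank_passwords(entries):
    """Return the cn of every entry whose userPassword is missing or blank."""
    flagged = []
    for entry in entries:
        passwords = entry.get("userPassword", [""])
        if not any(value.strip() for value in passwords):
            flagged.append(entry.get("cn", ["<unknown>"])[0])
    return flagged

# Illustrative entries; names taken from the example directory above.
entries = [
    {"cn": ["Diana Goyer"], "userPassword": ["cGFzc3dvcmQ="]},
    {"cn": ["Adam Smith"], "userPassword": [""]},
    {"cn": ["Eric Meyer"]},  # no userPassword attribute at all
]
print(users_with_blank_passwords(entries))  # -> ['Adam Smith', 'Eric Meyer']
```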
To enable the system policy for synchronization
1 Click Manage > Policies.
2 In the left pane, under Miscellaneous Workflows, click the plus sign (+) next
to External LDAP server synchronization.
3 Select System policy for Syncing external LDAP users.
4 Complete the General tab.
Note: The General tab and the Scheduling tab each include a Save option. Do
not click Save until you complete the fields on both tabs. If you click
Save before you complete each tab, PureDisk saves the specifications you made
up to that point and closes the dialog box. To complete more fields, open the
dialog again in edit mode.
For example, you might want to stop running this policy during a system
maintenance period. If you select Disabled, you do not need to enter
information in the Scheduling tab to suspend and then re-enable this
policy.
See “Enabling the PureDisk system policy that synchronizes PureDisk with an
external directory service” on page 40.
The following topics describe other changes you might need to make to your
authentication configuration:
See “Adding, changing, or deleting users or groups” on page 43.
See “Changing the youruserclass, yourloginattrib, or yournameattrib variables
in your directory service’s ldap.xml file” on page 44.
You can use the same procedure to add or delete user groups. The procedure is as
follows:
See “Managing user groups” on page 39.
13 Use the procedure in the following topic to add the user groups back:
See “Managing user groups” on page 39.
14 Run the system policy for synchronizing external directory service users
again.
The instructions for how to run this policy are in step 12.
5 Click Save.
6 Log in to the storage pool authority as root.
7 Type the following command to restart pdweb:
Entity IP address
Client 1 IP-out1
Client 2 IP-in3
■ Content router 1
■ Controller 1
■ Metabase engine 1
■ Metabase server
■ NetBackup export engine
■ Storage pool authority
■ Content router 2
■ Controller 2
■ Metabase engine 2
DNS name (in FQDN format)    Client 1 (outside the firewall)    Client 2 (inside the firewall)
2 Configure the firewall to translate the IP addresses and outside ports to inside
addresses and inside ports.
Use your firewall software’s documentation to help you translate the ports.
In the example storage pool, the translations are as follows for the outside
ports:
IP-out2 443
IP-out3 443
IP-out4 443
IP-out5 443
IP-out6 443
IP-out7 443
In the example storage pool, the translations are as follows for the inside
ports:
IP-in1 443
IP-in1 10082
IP-in2 10082
IP-in1 10101
IP-in2 10101
IP-in1 10087
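One way to keep the translations straight before you enter them into your firewall software is to tabulate them as outside (address, port) to inside (address, port) pairs. The pairings below are illustrative placeholders only, not a prescribed mapping for your site:

```python
# Hypothetical outside-to-inside translations for the example storage pool.
# Replace the pairings with the ones your firewall actually needs.
translations = {
    ("IP-out2", 443): ("IP-in1", 443),
    ("IP-out3", 443): ("IP-in1", 10082),
    ("IP-out4", 443): ("IP-in2", 10082),
    ("IP-out5", 443): ("IP-in1", 10101),
    ("IP-out6", 443): ("IP-in2", 10101),
    ("IP-out7", 443): ("IP-in1", 10087),
}

# Look up where traffic arriving on an outside address and port is sent.
inside_addr, inside_port = translations[("IP-out3", 443)]
print(inside_addr, inside_port)
```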
4 Repeat step 3 for each content router, metabase server, storage pool
authority, and NetBackup export engine service on every node.
Do not perform these steps for the metabase engines. For the metabase
engines, PureDisk updates the FQDN information automatically when you
update the storage pool authority and the controller.
5 Proceed to the following:
See “Creating a new department with single-port settings” on page 53.
Note: Make sure to add a new department. The single-port feature does not work
if you add a new location.
6 Click Add.
7 Proceed to the following:
See “Creating a new configuration template” on page 55.
5 Click the box to the left of the name of the new department.
For example, click to the left of ATOP Dept.
6 Click Assign.
7 Proceed to the following:
See “Specifying port number 443 as the default port in the configuration file
template” on page 56.
5 Under the expanded contentrouter > port, select All OS: portnumber.
6 In the Value field, type 443.
7 Click Save.
8 Repeat the following steps to change the port value to 443 for the ctrl > tcport
field and the debug > dldport field:
Step 6
Step 7
9 Perform one of the following tasks:
■ (Conditional) Configure port 443 in replication policies
See “(Conditional) Configuring port 443 in replication policies” on page 57.
■ Install the agent software on the clients or move the clients
See “Installing agent software on the clients or moving clients” on page 58.
4 Repeat step 3 for each client that you need to move to the new department.
Chapter 3
Data replication
This chapter includes the following topics:
■ Replication jobs
■ Tuning replication
available at headquarters at all times. You can create a replication policy to copy
data from the remote storage pool to the headquarters storage pool at regular
intervals, such as nightly.
The data replication process does not copy system data such as data selections
or policies that you configured. You can preserve this system data through storage
pool authority replication. For more information about storage pool authority
replication, see the following:
See “About storage pool authority replication (SPAR)” on page 199.
You can run a replication policy at any time. However, if the policy includes any
data selections that PureDisk has not yet backed up, the replication policy does
not replicate those data selections. A replication policy copies only backed up data
selections.
Additionally, replication jobs and content router rerouting jobs cannot run
simultaneously. If you start a replication job and then start a rerouting job,
PureDisk stops the replication job.
Note: If you experience replication job performance degradation and you have a
high-latency communication network between the two storage pools, you can
possibly improve performance by changing some default TCP/IP settings. For
more information, see "About changing TCP/IP settings to improve replication
job performance" in the PureDisk Best Practices Guide, Chapter 5: Tuning PureDisk.
■ You can replicate between storage pools if each storage pool is at the same
release level.
■ You can replicate between storage pools when the destination storage pool is
at a release level that is higher than the source storage pool. However,
Symantec recommends that you install all your storage pools with the same
PureDisk release.
For example, Symantec supports replication between a source storage pool at
the PureDisk 6.5.x release level and a destination storage pool at the PureDisk
6.6 release level. Symantec does not guarantee data integrity when you replicate
between storage pools with other nonidentical release levels.
■ You cannot replicate between storage pools when the destination storage pool
is at a release level that is lower than the source storage pool.
selections with a special icon when you click Manage > Agent on the destination
storage pool’s Web UI.
Note: You can delete data from a data selection when you run a removal policy.
PureDisk does not remove this data automatically from a replicated data selection.
PureDisk does not replicate delete actions. If you want to keep the source and the
replicated data selection identical, define similar removal policies for both data
selections.
After you create and run a policy to replicate data from a source storage pool to
a destination storage pool, you can do the following:
■ View the replicated data on the destination storage pool.
For more information about how to view replicated data, see the following:
See “Replication jobs” on page 68.
■ Use restore functions to copy the replicated data to a client that is attached
to the destination storage pool.
For more information about how to copy replicated data, see the following:
See “Copying replicated data to clients on the destination storage pool”
on page 70.
■ Restore the replicated data back to the original client or another client on the
source storage pool.
For more information about how to restore replicated data, see the following:
See “Restoring replicated data back to clients on the source storage pool”
on page 70.
U_*.jpg_files
W_*.jpg_files
W_*.xls_files
If you type *files in the Data selection name field, the policy backs up
all three data selections.
If you type W* in the field, the data selections for this policy include only
the data selections that are named W_*.jpg_files and W_*.xls_files.
For more information on filtering, see the following:
See the PureDisk Backup Operator’s Guide.
Replication jobs
PureDisk runs replication jobs on the source storage pool.
PureDisk creates virtual agents on the destination (remote) storage pool when
you implement replication. PureDisk runs the following types of jobs on virtual
agents:
■ Imports of forwarded data during replication
■ Data removal
■ Maintenance
Tuning replication
PureDisk includes configuration parameters that you can manipulate in order to
tune replication performance. For information about tuning, see the following:
See “Tuning replication performance” on page 337.
Chapter 4
Exporting data to
NetBackup
This chapter includes the following topics:
After you export the PureDisk files to NetBackup, you can treat these files as if
they were native NetBackup files. From the NetBackup administration console,
you can generate NetBackup reports, browse the files, and manage the files.
To restore the data that you exported to NetBackup, use the NetBackup procedures
that are described in the NetBackup administration guides.
The following topics provide an overview of the NetBackup export engine:
■ See “Export limitations” on page 74.
■ See “Requirements for exporting data to NetBackup” on page 74.
■ See “Requirements for restoring data from NetBackup” on page 75.
■ See “Enabling and using the NetBackup export engine” on page 75.
Export limitations
The PureDisk NetBackup export engine lets you export backed up PureDisk Files
and Folders data selections to NetBackup. The NetBackup export engine does not
export other PureDisk data selection types.
When you choose data selections for export, a tree structure appears in the Web
UI, and you make your selection from the tree. Be aware that PureDisk exports
only the Files and Folders data selections in the tree. For example, if you select a
storage pool that has many types of data selections, PureDisk exports only the
Files and Folders data selections. In addition, if you choose to export replicated
data from a target storage pool, PureDisk exports only the replicated Files and
Folders data selections.
You can configure the required software on its own dedicated node, or you can
configure this software on a node with other PureDisk services.
Figure 4-1 shows the software that you need to configure to enable PureDisk
exports to NetBackup.
In Figure 4-1, label 2 marks the NetBackup environment.
Both node_1 and node_3 host a PureDisk NetBackup export engine and the
NetBackup client software. In this storage pool, node_3 can be a low-end computer
because it only serves to transfer data. If you had an all-in-one PureDisk
environment, you would have to install the NetBackup client on that one node.
The figure shows two clients: kwiek and speedy. The NetBackup export engine
on node_1 exports data from kwiek. The NetBackup export engine on node_3
exports data from speedy.
To perform a direct restore of files from NetBackup to speedy, install the
NetBackup client software on speedy. Configure the PureDisk environment first,
and then configure NetBackup.
To configure PureDisk and NetBackup to export PureDisk data selections
◆ Complete the following procedures:
■ See “Configuring NetBackup to receive data exported from PureDisk ”
on page 79.
■ See “Configuring PureDisk to export data to NetBackup” on page 84.
Ignore this message. A later step in this procedure starts the xinetd daemon.
For information about how to install the NetBackup client, see the following:
See the NetBackup Installation Guide for UNIX and Linux.
2 (Conditional) For each PureDisk node, create a file for the host FQDN and
another file for the service FQDN in the altnames directory on the NetBackup
master server.
Perform this step if the storage pool you want to back up is clustered.
This step is needed because the bp.conf file on each node contains the
physical host address. However, the backup process and the restore process
use the service address.
If necessary, create the altnames directory itself. Within the altnames
directory, use the touch(1) command to create a file for each node's host
FQDN and each node's service FQDN.
Example 1. To create the altnames directory on a UNIX master server, type
the following command:
# mkdir /usr/openv/netbackup/db/altnames
Example 2. Assume that you want to create file names in the altnames
directory of a UNIX NetBackup master server for the nodes in the following
two-node cluster:
■ Node 1 = allinone.acme.com (host FQDN) and allinones.acme.com
(service FQDN)
■ Node 2 = passive.acme.com (host FQDN) and passives.acme.com (service
FQDN)
For the all-in-one node (node 1), type the following commands on the master
server to create the correct files in the altnames directory:
# touch allinone.acme.com
# touch allinones.acme.com
For the passive node (node 2), type the following command on the master
server to create the correct files in the altnames directory:
# touch passive.acme.com
# touch passives.acme.com
For more information about the altnames directory and creating files inside
the altnames directory, see the following:
See the NetBackup Administrator’s Guide, Volume I.
3 Determine if NetBackup access control (NBAC) is enabled in your NetBackup
environment.
One way to tell whether NBAC is enabled is to examine the bp.conf file. If
USE_VXSS = AUTOMATIC or USE_VXSS = REQUIRED appears in the file, then NBAC
is enabled.
■ If NBAC is enabled, proceed to step 4.
For more information about NBAC, see the NetBackup Security and
Encryption Guide.
■ If NBAC is not enabled, proceed to step 6.
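The bp.conf check in step 3 can also be scripted. The snippet below is a hypothetical helper that applies the rule stated above to the text of a bp.conf file; it is not a NetBackup utility:

```python
def nbac_enabled(bp_conf_text):
    """Return True if bp.conf sets USE_VXSS to AUTOMATIC or REQUIRED."""
    for line in bp_conf_text.splitlines():
        name, _, value = line.partition("=")
        if name.strip() == "USE_VXSS":
            return value.strip() in ("AUTOMATIC", "REQUIRED")
    return False

# Illustrative bp.conf contents.
sample = "SERVER = masterserver\nUSE_VXSS = AUTOMATIC\n"
print(nbac_enabled(sample))                       # -> True
print(nbac_enabled("SERVER = masterserver\n"))    # -> False
```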
bpnbat -addmachine
The bpnbat command prompts you for the machine name, prompts you
to create an NBAC password, and prompts you to confirm the password.
For the machine name, type the FQDN of the PureDisk node.
■ Change to the directory where the bpnbaz command resides.
On UNIX master servers, the bpnbaz command resides in
/usr/openv/netbackup/bin/admincmd. On Windows master servers, the
bpnbaz command resides in install_path\NetBackup\bin\admincmd.
■ Repeat the preceding bulleted steps for each PureDisk node that hosts a
NetBackup client.
For example, type the following commands on a UNIX master server:
masterserver# cd /usr/openv/netbackup/bin
masterserver# bpnbat -addmachine
Machine Name: potato.idaho.com
Password: *****
Password: *****
Operation completed successfully.
masterserver# cd admincmd
masterserver# bpnbaz -allowauthorization potato.idaho.com
Operation completed successfully.
5 (Conditional) On the PureDisk node that hosts the NetBackup client software,
install VxAT.
Perform this step if NBAC is enabled in your NetBackup environment.
■ Log into the PureDisk node as root.
■ Type the following commands to create NBAC credentials for the node:
puredisknode# cd /usr/openv/netbackup/bin/admincmd
puredisknode# bpnbat -loginmachine
puredisknode# cd /usr/openv/NetBackup/bin
puredisknode# bpnbat -loginmachine
Does this machine use Dynamic Host Configuration Protocol (DHCP)? (y/n)? n
Authentication Broker: colonel.flagg.com
Authentication port [Enter = default]:
Machine Name: potato.idaho.com
Password: *****
Operation completed successfully.
6 On each PureDisk node that you want to configure, make sure that the xinetd
daemon is running.
Type the following command to start xinetd if it is not already running:
# /etc/init.d/xinetd start
To ensure that the xinetd daemon starts after you restart the system, type
the following command:
# chkconfig xinetd on
of the physical host for these nodes. These are the FQDNs of the physical
nodes. Do not specify the service FQDNs.
/Storage/var/NbuExportClientNameChanges.txt
my_agent_name_is_strider
Example 2: Assume that you have two clients whose names, like the following, both transform to the same string:
my agent_name*is strider
To avoid duplication, PureDisk adds a counter to the end of the second name
it encounters and transforms the names as follows:
my_agent_name_is_strider
my_agent_name_is_strider_2
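The renaming described above can be sketched as a small shell helper. This is an illustration only, not the PureDisk implementation; it assumes that any character other than a letter, digit, or underscore is replaced with an underscore, and that a counter suffix resolves duplicates. The second client name below is a hypothetical example.

```shell
# sanitize NAME: replace every character that is not a letter, digit,
# or underscore with an underscore (assumed transformation rule).
sanitize() {
    printf '%s' "$1" | tr -c 'A-Za-z0-9_' '_'
}

# unique_name NAME: print the sanitized name, appending a counter if
# the same sanitized name was already seen (sketch: counter stops at 2).
seen=""
unique_name() {
    base=$(sanitize "$1")
    case " $seen " in
        *" $base "*) printf '%s_2\n' "$base" ;;  # duplicate: add counter
        *) seen="$seen $base"; printf '%s\n' "$base" ;;
    esac
}

unique_name 'my agent_name*is strider'   # -> my_agent_name_is_strider
unique_name 'my agent name?is strider'   # hypothetical second client
                                         # -> my_agent_name_is_strider_2
```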
3 Make sure that one or more data selections that you want to export have been
created and backed up.
If no backed up data selections exist in the storage pool, use the instructions
in the PureDisk Backup Operator's Guide to create one or more data selections
and back them up to the storage pool.
4 Create a PureDisk policy to export data to NetBackup.
For information about how to create an export to NetBackup policy, see the
following:
See “Creating or editing an export to NetBackup policy” on page 85.
■ You can create multiple PureDisk export policies for a single NetBackup export
engine. PureDisk runs one export job per export policy at a time.
■ If you have two or more PureDisk export policies, these policies can send data
to the same NetBackup DataStore policy. However, you are not limited to only
one NetBackup DataStore policy. You can have multiple NetBackup DataStore
policies.
■ PureDisk can run multiple export jobs simultaneously from multiple NetBackup
export engines if the data originated on two or more PureDisk clients. However,
if the export jobs work with data that originated from a single PureDisk client,
PureDisk runs the jobs one at a time.
Use the following procedure to create a PureDisk policy that can export Files and
Folders data selections to NetBackup.
To create an Export to NetBackup policy
1 Click Manage > Policies.
2 In the left pane, under Data Management Policies, click Export to NetBackup.
3 Complete one of the following steps:
■ To create a policy, in the right pane click Create Policy.
■ To edit a policy, expand Export to NetBackup and click a policy name.
U_*.jpg_files
W_*.jpg_files
W_*.xls_files
If you type *files in the Data selection name field, the policy backs up
all three data selections.
If you type W* in the field, the data selections for this policy include only
the data selections that are named W_*.jpg_files and W_*.xls_files.
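The wildcard behavior described above can be illustrated with shell `case` patterns, used here as a stand-in for the PureDisk filter (an illustration, not the product's matching code):

```shell
# matches PATTERN NAME: succeeds if NAME matches the glob PATTERN.
matches() {
    case "$2" in
        $1) return 0 ;;
        *)  return 1 ;;
    esac
}

# "W*" selects only the two data selections whose names start with W.
for name in 'U_*.jpg_files' 'W_*.jpg_files' 'W_*.xls_files'; do
    if matches 'W*' "$name"; then
        echo "W* selects: $name"
    fi
done
```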
For more information about filtering, see the following:
See the PureDisk Backup Operator’s Guide.
■ (Conditional) Select a template from the Data selections based on template
drop-down box to specify an existing data selection template to use.
This template applies to Files and Folders data selections only. The data
selection uses the data selection rules from the template. These rules
determine the files and directories to back up.
You can select any data selection that you previously applied to the client.
Caution: The filters in the Metadata tab of an Export to NetBackup policy let you
narrow the list of files that you want PureDisk to export. If you do not define
filters, PureDisk exports all the files. When you specify filters in an Export to
NetBackup policy, you might encounter occasional problems when you browse
files in the NetBackup Backup and Restore interface. The problems can occur with
the NetBackup images that PureDisk creates. If you do not find your exported
files when you use NetBackup's Backup and Restore interface, use the bplist(1M)
NetBackup command line utility.
NetBackup Policy Name Specify the name of the NetBackup DataStore policy.
Caution: If the export job finds no backups that match the date specified, the
job runs and shows successful completion, but PureDisk exports nothing to
NetBackup. This behavior differs from the behavior for regular backups
because regular backups fail if nothing is backed up.
To avoid this situation, ensure that each server agent on each node that hosts a
NetBackup export engine is activated.
To activate a server agent
1 Click Settings > Topology.
2 Expand the tree in the left pane so that it shows all the PureDisk services.
3 Select the service you want to start.
For example, select NBU Export Engine.
4 In the right pane, click Activate NetBackup Export Engine.
Example 2. If you replicate a data selection and then export that data selection
from the destination storage pool, the destination storage pool displays the source
client's name. The source client's name appears in the following format:
[R] client_name (agx,stpy)
# mkdir restoredir
4 Use a network method to move the files from the PureDisk node with the
NetBackup client software to the PureDisk client that needs the files.
For example, you can use FTP to transfer the file.
This step writes the files to the client, but it does not put the files under
PureDisk control. Perform the next step if you want to use PureDisk to back
up the files again, which puts them under PureDisk control.
5 (Optional) Use PureDisk to back up the files.
This step puts the files back into the PureDisk environment.
More information on how to perform a backup is available.
See the PureDisk Backup Operator’s Guide.
Note: To recover PureDisk when you have enabled the PureDisk deduplication
option (PDDO), see the PureDisk Deduplication Option Guide. It contains
PDDO-specific information, which includes how to avoid a potential data loss
situation.
When the disaster recovery backup policy runs, it preserves all data that you need
to restore a PureDisk environment in the event of a disaster. A disaster recovery
backup ensures that you can return your environment to its previous state.
The following processes occur when the disaster recovery policy runs:
■ It backs up the metadata in the storage pool.
This backup includes the following data:
■ Storage pool authority database
■ The database(s) for the metabase engines
■ The topology files
Note: Make sure that the NetBackup client software version number is the
same as the NetBackup environment version number.
For more information about how to install the software, see the following:
See “Configuring the NetBackup client software” on page 102.
If you accept the short name during the install, edit the
/usr/openv/netbackup/bp.conf file on each node and change the line that
identifies the client. For example:
CLIENT_NAME=my_pdnode.my_domain.com
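The bp.conf edit above can be scripted with sed. The following sketch demonstrates the substitution on a scratch copy of the file; on a real node the file is /usr/openv/netbackup/bp.conf, and the client and domain names are the example values from the text:

```shell
# Demonstrate the CLIENT_NAME edit on a scratch copy of bp.conf.
BPCONF=/tmp/bp.conf.demo
printf 'SERVER=master.my_domain.com\nCLIENT_NAME=my_pdnode\n' > "$BPCONF"

# Replace the short client name with the FQDN.
sed -i 's/^CLIENT_NAME=.*/CLIENT_NAME=my_pdnode.my_domain.com/' "$BPCONF"

grep '^CLIENT_NAME' "$BPCONF"   # prints CLIENT_NAME=my_pdnode.my_domain.com
```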
2 (Conditional) For each PureDisk node, create a file for the host FQDN and
another file for the service FQDN in the altnames directory on the NetBackup
master server.
Perform this step if the storage pool you want to back up is clustered.
This step is needed because the bp.conf file on each node contains the
physical host address. However, the backup process and the restore process
use the service address.
If necessary, create the altnames directory itself. Within the directory, create
a file of the following format for each node:
xxxxnode1.symc.be
Create these files for each node in the PureDisk storage pool.
Example 1.
To create the altnames directory on a UNIX master server, type the following
command:
# mkdir /usr/openv/netbackup/db/altnames
Example 2.
Assume that you want to create file names in the altnames directory for the
nodes in the following two-node cluster:
■ Node 1 = allinone.acme.com (host FQDN) and allinones.acme.com
(service FQDN)
■ Node 2 = passive.acme.com (host FQDN) and passives.acme.com (service
FQDN)
To create a file in the altnames directory of a UNIX master server, you type
the following commands:
# touch allinone.acme.com
# touch allinones.acme.com
# touch passive.acme.com
# touch passives.acme.com
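The four touch commands above can be combined into a loop. In this sketch, ALTNAMES defaults to a scratch path so the snippet is self-contained; on a real master server the directory is /usr/openv/netbackup/db/altnames:

```shell
# Create one empty altnames file per FQDN (host and service names
# from the example two-node cluster above).
ALTNAMES="${ALTNAMES:-/tmp/altnames.demo}"
mkdir -p "$ALTNAMES"
for fqdn in allinone.acme.com allinones.acme.com \
            passive.acme.com passives.acme.com; do
    touch "$ALTNAMES/$fqdn"
done
ls "$ALTNAMES"
```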
For information about the altnames directory and creating files inside the
altnames directory, see the NetBackup Administrator’s Guide, Volume I.
# /etc/init.d/xinetd start
If you restart the system, type the following command to ensure that the
xinetd daemon starts:
# /sbin/insserv /etc/init.d/xinetd
3 (Optional) Select times in the Escalate warning after or the Escalate error
and terminate after drop-down lists.
These times specify the elapsed time before PureDisk sends an email message.
PureDisk can notify you if a backup does not complete within a specified time.
These fields allow you to define the times for escalation actions.
For example, you can configure PureDisk to send an email message to an
administrator if the policy does not complete in eight hours.
Note:
Choose only one method to back up your PureDisk data (NetBackup, Samba Share,
or Third Party Product). Then, complete all of the information fields for that
method. Do not complete any fields for the methods that you do not choose.
■ incremental_DR_backup.sh
4 In the Directory Path Name field, specify the full path (mount point) to a
directory in which to write the backed up files.
Specify /DRdata in this field if the following are both true:
■ You used the full_DR_backup.sh script or the incremental_DR_backup.sh
script.
■ You did not modify the scripts.
These scripts write to /DRdata even if the directory is mounted on
another disk or partition. The script mounts the directory that you
specify and writes to it.
Specify your own directory in this field if either of the following are true:
■ You do not use the full_DR_backup.sh script or the
incremental_DR_backup.sh script.
■ You modified the scripts to write to a different directory. Make sure that
your backup scripts write to the directory you specify. PureDisk does not
mount this directory.
5 In the Share Name field, specify the name of a remote Samba shared file
system.
Use the following format for the shared file system:
//hostname/sharename
hostname Specify the host name or IP address upon which the target
shared directory resides.
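The //hostname/sharename form can be split into its two parts with POSIX parameter expansion. This is an illustrative helper, not a PureDisk command, and the share name below is a made-up example:

```shell
share='//filer01.acme.com/drbackups'

rest=${share#//}        # drop the leading slashes
host=${rest%%/*}        # text before the first remaining slash
sharename=${rest#*/}    # text after it

echo "host=$host share=$sharename"
# prints: host=filer01.acme.com share=drbackups
```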
Caution: If you choose this method, be aware that you need to copy your backups
to a secondary host. If the primary host fails, you are likely to lose both the original
files and the backed up files that are written to the local directory.
■ incremental_DR_backup.sh
3 In the Directory Path Name field, specify the full path to the directory in
which to write the backed up files.
If you modified or did not use the full_DR_backup.sh script or the
incremental_DR_backup.sh script, specify your own directory in this field.
/opt/pdconfigure/scripts/support/DR_BackupSampleScripts/
Table 5-1 lists the scripts that are located in this directory and describes their
functions.
If you modify the scripts that PureDisk provides, the scripts are not protected.
During a restore procedure, PureDisk overwrites the scripts if they remain in the
default installation directory (/opt). You must place them in another directory
for protection (for example, in /usr or /tmp).
Do not create actual files for listfile or spool_files. The disaster recovery
workflow creates these files and provides them to the script.
If you run an incremental backup, and no full backup exists, PureDisk
performs a full backup.
The disaster recovery backup policy calls the script and runs it each time
with a different option, in the following order:
2 Copy the script you created to every content router in your environment.
Write this script to the same location on each content router. For example,
/opt/external_scripts.
To enable the pdkeyutil command, enter the following command on all active
nodes:
# /opt/pdag/bin/pdkeyutil -insert
The preceding command initiates a dialog session with the pdkeyutil utility. The
utility prompts you to specify a password for the encryption utility to use during
disaster recovery backups and restores.
Remember the password that you type. You need this password to restore PureDisk
storage pool authority configuration files in the event of a disaster.
If you do not remember this password, you cannot complete the restore.
When you perform a restore, use the password that was in effect when
your disaster recovery backup ran, even if you have changed the
password since then.
To determine if the key is enabled, enter the following command:
# /opt/pdag/bin/pdkeyutil -display
# /opt/pdcr/bin/crcontrol --getmode
If the crcontrol command output for your content router is not set correctly,
type the following command to set one or more modes manually:
For mode, type one of the following: GET, PUT, DEREF, SYSTEM, or STORAGED.
For example, if DEREF=Hold in your output, type the following command:
/Storage/log/pddb/postgresql.log
/opt/pdinstall/DR_Restore_all.sh
■ Optimizes the content router restore. This action occurs when the disaster
affected only a subset of the content routers.
The DR_Restore_all.sh script fully restores the /Storage/data directory of
all failed content router nodes.
If the configuration includes any content routers that do not need to be fully
recovered because no disaster occurred, the script performs minimal restores.
The restores bring the content routers to a state that is consistent with the
point in time of the last disaster recovery backup. Some data segments
might have been removed or added since that backup was done.
In these cases, the script does the following:
■ Restores all databases and configuration files.
■ Restores the segment containers so that they are consistent with the content
router databases.
■ Restores all segment containers that a removal job has changed or deleted
since the last backup.
To perform a disaster recovery of an unclustered storage pool
1 Reinstall the software on your storage pool.
Perform the following procedure:
■ See “Reinstalling required software (unclustered recovery)” on page 123.
2 Perform one of the following procedures, depending on the way you backed
up your PureDisk environment.
■ See “Performing a disaster recovery of an unclustered PureDisk storage
pool from a NetBackup disaster recovery backup (NetBackup, unclustered
recovery)” on page 133.
■ See “Performing a disaster recovery from a Samba backup (Samba,
unclustered recovery)” on page 138.
■ See “Performing a disaster recovery from a third-party product backup
(third-party, unclustered recovery)” on page 147.
Reinstalling PDOS
For each node that failed, install PDOS and any PDOS updates you installed onto
the PDOS base release. The following procedure explains how to reinstall PDOS.
To reinstall PDOS on the nodes that failed
1 Install PDOS on each failed node.
Use the installation instructions in the PureDisk Storage Pool Installation
Guide.
2 (Conditional) Install PDOS updates on each failed node.
Perform this step if you installed any PDOS updates on the nodes before the
disaster.
Use the installation instructions in the update README file.
3 (Conditional) Perform intermediate installation tasks.
Perform this step if the node has special requirements. For example, if you
need to disable multipathing or if you need to configure iSCSI disks for this
node, perform the additional steps in the chapter called 'Preparing to configure
the storage pool' in the PureDisk Storage Pool Configuration Guide.
4 Proceed to one of the following, depending on the disks attached to this node:
■ See “(Conditional) Reconfiguring the storage partitions on DAS/SAN disks”
on page 125.
# yast
Type yast or YaST to start the interface. Do not type other combinations of
uppercase and lowercase letters.
2 In the YaST Control Center main page, select System > Partitioner.
3 Select Yes on the warning pop-up.
4 On the Expert Partitioner page, select VxVM.
5 On the Create a Disk Group pop-up, type your site-specific name for the disk
group or accept the default.
6 Click OK.
7 On the Veritas Volume Manager: Disks Setup page, select a disk that you
want to include in the disk group.
8 Select Add Disk and press Return.
You can only add disks that are not yet partitioned. If you try to add a disk
that has partitions, the disk is not added to the disk group. Delete all
partitions from the disk before you try to add it.
To delete all partitions on a disk, select Expert in the YaST interface and
select Delete Partition Table and Disk Label.
9 Repeat the following steps for all the disks that you want to include in the
disk group:
■ Step 7
■ Step 8
10 Click Next.
11 Proceed to the following topic:
See “Configuring the /Storage partition (DAS/SAN disks)” on page 126.
6 Click Next.
7 Proceed to one of the following topics:
■ If you want to configure a /Storage/data or a /Storage/databases
partition to enhance performance, proceed to the following:
See “(Optional) Configuring the /Storage/data and the /Storage/databases
partition (DAS/SAN disks)” on page 127.
For more information, see the PureDisk Storage Pool Installation Guide.
■ In the Mount Point field, type /Storage/data. You must type this name
because it is not in the drop-down list.
■ Click OK.
3 Click Next.
4 Proceed to the following topic:
See “Completing the storage configuration (DAS/SAN disks)” on page 128.
4 Select Finish.
5 Select Quit.
# yast
10 Select Next.
11 Proceed to the following topic:
See “Configuring the /Storage partition (iSCSI disks)” on page 130.
3 Select Next.
4 Proceed to the following:
See “Completing the storage configuration (iSCSI disks)” on page 132.
For upgrade_tar_file, specify the full path to the location of the latest update
or patch that your PureDisk environment was running. For example:
2 (Conditional) Install the NetBackup client software on all nodes that failed.
Perform this step if you write your disaster recovery backups to a NetBackup
environment.
3 Proceed to one of the following:
■ See “Performing a disaster recovery of an unclustered PureDisk storage
pool from a NetBackup disaster recovery backup (NetBackup, unclustered
recovery)” on page 133.
■ See “Performing a disaster recovery from a Samba backup (Samba,
unclustered recovery)” on page 138.
■ See “Performing a disaster recovery from a third-party product backup
(third-party, unclustered recovery)” on page 147.
2 Run the disaster recovery script from the storage pool authority node.
From the PDOS command line, type the following command:
# /opt/pdinstall/DR_Restore_all.sh
5 Provide the full path of any upgrade patch files that need to be applied.
For example:
/root/NB_PDE_6.5.1.17630.tar
Answer yes (y) to apply additional patches. Answer no (n) to continue with
the disaster recovery process.
6 Respond to the prompts regarding the topology file.
Directory /Storage/etc must contain the following files:
■ topology.ini or topology.ini.enc
■ topology_nodes.ini
If these files are not present, the script retrieves them. If topology.ini.enc
is present, the script issues the following prompt for the password:
Type the password that you use for the storage pool configuration wizard.
7 Examine the topology information the script displays and specify the nodes
you want to restore.
The script reads the topology file and presents a display like the following
example:
The preceding example shows the topology that the script can restore in this
disaster recovery operation. Examine this information for accuracy and
specify the node numbers that you want to restore. If you want to restore
more than one node, use commas to separate the node numbers.
If a node did not fail and you want to preserve the data on that node, do not
specify that node number. The restore procedure completely reinstalls the
whole topology. However, for the nodes that did not fail, the script hides
everything in /Storage by unmounting the mount points before it removes
the data.
After the reinstall is complete, the script performs the following actions:
■ Remounts those mount points
■ Restores all the data on the failed nodes
■ Restores all the databases on all the nodes
■ Restores any removed data. A data removal job might have been run since
the last time the databases were backed up. For this reason, the script also
restores the removed data on the nodes that did not fail. This action
synchronizes the databases and data.
For example:
Type the password that you use for the storage pool configuration wizard.
11 Observe the completion message.
When the operation completes successfully, the script displays the following
message:
12 (Conditional) Run the following script on the storage pool authority node to
upgrade the security protocol:
# /opt/pdinstall/disable_sslv2.sh
Perform this step if you ran the disable_sslv2.sh script on this storage pool
at any time. The disaster recovery restore does not enable this script
automatically.
Symantec recommends that you run the script unless PureDisk 6.5.x storage
pools need to replicate to this storage pool.
You need your storage pool's topology information in order to perform the restore.
Perform one of the following procedures:
■ See “(Conditional) Recreating the topology with current topology information
(Samba, unclustered recovery)” on page 139.
■ See “(Conditional) Recreating the topology without current topology
information (Samba, unclustered recovery)” on page 139.
# /opt/pdinstall/edit_topology.sh
Your goal is to recreate the PureDisk topology so that it matches the topology
that existed before the disaster.
For information about the storage pool’s topology and node identification
information, see the worksheets that you completed during this storage pool’s
installation.
# /opt/pdinstall/edit_topology.sh
Note: Make sure you enter the correct storage pool ID. Make sure that all
passwords you use during the disaster recovery process are the same as those
that existed before the disaster occurred.
# /opt/pdinstall/install_newStoragePool.sh
/Storage/etc/topology.ini
/Storage/etc/topology_nodes.ini
The preceding files are not needed at this time. The disaster recovery script
restores these files from the backup.
2 Run the disaster recovery script from the storage pool authority node.
From the PDOS command line, type the following command:
# /opt/pdinstall/DR_Restore_all.sh
4 Provide the information that PureDisk needs to mount the shared file system.
For example:
7 Type the full system path name to the disaster recovery script used to save
your PureDisk data.
If you used the DR_Restore_all script in the default PureDisk location, press
return.
If you supplied your own restore script, PureDisk does not protect it.
The script is overwritten during a restore procedure if it remains in the
default installation directory (/opt). You must write it to another
directory for protection (for example, to /usr or /tmp).
For example:
9 Provide the full path of any upgrade patch files that need to be applied.
For example:
/root/NB_PDE_6.6.1.17630.tar
Answer yes (y) to apply additional patches. Answer no (n) to continue with
the disaster recovery process.
10 Type the storage pool ID for the storage pool that you want to restore.
This ID is the value specified for the storagepoolid property in the
topology.ini file. This value is used to retrieve the topology file.
For example:
■ topology_nodes.ini
If these files are not present, the script retrieves them. If topology.ini.enc
is present, the script issues the following prompt for the password:
Type the password that you use for the storage pool configuration wizard.
12 Examine the topology information the script displays and specify the nodes
you want to restore.
The script reads the topology file and presents a display like the following
example:
The preceding example shows the topology that the script can restore in this
disaster recovery operation. Examine this information for accuracy and
specify the node numbers that you want to restore. If you want to restore
more than one node, use commas to separate the node numbers.
If a node did not fail and you want to preserve the data on that node, do not
specify that node number. The restore procedure completely reinstalls the
whole topology. However, for the nodes that did not fail, the script hides
everything in /Storage by unmounting the mount points before it removes
the data.
After the reinstall is complete, the script performs the following actions:
■ Remounts those mount points
■ Restores all the data on the failed nodes
■ Restores all the databases on all the nodes
■ Restores the removed data. A data removal job might have been run since
the last time the databases were backed up. For this reason, the script also
restores the removed data on the nodes that did not fail. This method
synchronizes the databases and data.
15 When the restore is complete, answer the prompts about encryption of the
topology.ini file.
For example:
Type the password that you use for the storage pool configuration wizard.
16 Observe the completion message.
When the operation completes successfully, the script displays the following
message:
17 (Conditional) Run the following script on the storage pool authority node to
upgrade the security protocol:
# /opt/pdinstall/disable_sslv2.sh
Perform this step if you ran the disable_sslv2.sh script on this storage pool
at any time. The disaster recovery restore does not enable this script
automatically.
Symantec recommends that you run the script unless PureDisk 6.5.x storage
pools need to replicate to this storage pool.
18 Perform a full disaster recovery backup.
Make sure you perform a full disaster recovery backup before you perform
any file backups or perform any incremental disaster recovery backups.
19 (Conditional) Re-enable the NetBackup export engine on any nodes that hosted
only a NetBackup export engine service.
Perform this step only if you have a node that hosted only a NetBackup export
engine service.
For information about how to enable a NetBackup export engine, see the
following:
See “About exporting data to NetBackup” on page 73.
■ You have a backup copy of this storage pool’s topology and you can recreate
it.
To recreate the topology when you have the storage pool’s topology information
◆ Enter the following command and follow the prompts to recreate this storage
pool’s topology:
# /opt/pdinstall/edit_topology.sh
Your goal is to recreate the PureDisk topology so that it matches the topology
that existed before the disaster.
For information about the storage pool’s topology and node identification
information, see the worksheets that you completed during this storage pool’s
installation.
# /opt/pdinstall/edit_topology.sh
Note: Make sure you enter the correct storage pool ID. Make sure that all
passwords you use during the disaster recovery process are the same as those
that existed before the disaster occurred.
# /opt/pdinstall/install_newStoragePool.sh
/Storage/etc/topology.ini
/Storage/etc/topology_nodes.ini
The preceding files are not needed at this time. The disaster recovery script
restores these files from the backup.
2 Run the disaster recovery script from the storage pool authority node.
From the PDOS command line, type the following command:
# /opt/pdinstall/DR_Restore_all.sh
5 Type the full system path name to the disaster recovery script used to save
your PureDisk data.
If you used the DR_Restore_all script in the default PureDisk location, press
return.
If you supplied your own restore script, remember that PureDisk does not
protect it. The script is overwritten during a restore procedure if it
remains in the default installation directory (/opt). To prevent this
problem, place it in another directory for protection, such as /usr or
/tmp.
For example:
7 Provide the full path of any upgrade patch files that need to be applied.
For example:
/root/NB_PDE_6.6.1.17630.tar
Answer yes (y) to apply additional patches. Answer no (n) to continue with
the disaster recovery process.
8 Type the storage pool ID for the storage pool that you want to restore.
This ID is the value specified for the storagepoolid property in the
topology.ini file. This value is used to retrieve the topology file.
For example:
■ topology_nodes.ini
If these files are not present, the script retrieves them. If topology.ini.enc
is present, the script issues the following prompt for the password:
Type the password that you use for the storage pool configuration wizard.
10 Examine the topology information the script displays and specify the nodes
you want to restore.
The script reads the topology file and presents a display like the following
example:
The preceding example shows the topology that the script can restore in this
disaster recovery operation. Examine this information for accuracy and
specify the node numbers that you want to restore. If you want to restore
more than one node, use commas to separate the node numbers.
If a node did not fail and you want to preserve the data on that node, do not
specify that node number. The restore procedure completely reinstalls the
whole topology. However, for the nodes that did not fail, the script hides
everything in /Storage by unmounting the mount points before it removes
the data.
After the reinstall is complete, the script performs the following actions:
■ Remounts those mount points
■ Restores all the data on the failed nodes
■ Restores all the databases on all the nodes
■ Restores the removed data. A data removal job might have been run since
the last time the databases were backed up. In that case, the script also
restores the removed data on the nodes that did not fail. This method
synchronizes the databases and data.
For example:
Type the password that you use for the storage pool configuration wizard.
14 Observe the completion message.
When the operation completes successfully, the script displays the following
message:
15 (Conditional) Run the following script on the storage pool authority node to
upgrade the security protocol:
# /opt/pdinstall/disable_sslv2.sh
Perform this step if you ran the disable_sslv2.sh script on this storage pool
at any time. The disaster recovery restore does not enable this script
automatically.
Symantec recommends that you run the script unless PureDisk 6.5.x storage
pools need to replicate to this storage pool.
■ Complete disaster. In this scenario, all or most of the storage pool has
experienced a disaster such as a computer-room flood or fire. You need to
recover multiple nodes.
To perform a disaster recovery of a clustered storage pool
1 Prepare the failed nodes for recovery.
In most cases, no matter what kind of disaster occurred, you need to prepare
the nodes before you run the disaster recovery script (DR_Restore_all.sh).
Perform one of the following procedures, depending on the type of disaster
that occurred:
■ See “Recovering from a single-node failover” on page 159.
■ See “Recovering one active node” on page 160.
■ See “Recovering from a data storage corruption” on page 166.
■ See “Recovering from a complete storage pool disaster (clustered, complete
storage pool disaster)” on page 169.
■ When the VCS installer prompts you to specify the nodes on which to
install the software, specify only the failed nodes. Do not install VCS on
the nodes that do not need to be recovered.
■ If you install VCS on only one node, the VCS installer issues a warning.
The warning asks you to confirm that you want to install only a single-node
cluster. Answer y.
■ At the end of the VCS 4.1 MP3 installation, the VCS installer asks you to
specify whether you are ready to configure VCS. Answer n.
6 For each node, type the following command to configure a service address
on the public NIC in the node:
For ip_address, specify the service IP address of the service you want to
configure. For n, specify the number of the public network interface card
(NIC) on this node.
For example, on node1.acme.com, you could type the following command:
Note: Make sure to repeat this step on each active node, including the healthy
nodes. Because you removed the service groups for the entire storage pool,
you need to recreate the service addresses for each node at this time.
For more information about this command, see the Veritas Storage
Foundation documentation.
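The command itself is not reproduced at this point in this copy. As a purely hypothetical illustration of its general shape (the interface number and address are invented; on Linux systems of this era a service address was commonly added as an interface alias), the example below echoes such a command rather than executing it:

```shell
# Hypothetical only: illustrates adding a service address as an alias on
# public NIC eth0. The address and netmask are invented values; the command
# is printed, not run, so nothing on the system changes.
echo "ifconfig eth0:1 10.80.10.101 netmask 255.255.255.0 up"
```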
■ On each active node that you want to restore, type the following command
to start that node's disk volumes:
For more information about this command, see the Veritas Storage
Foundation documentation.
■ Repeat the preceding steps on all nodes. Make sure to import and start
the disk volumes on all nodes in the storage pool, including those that did
not fail.
# mkdir /Storage
As you mount /Storage on each node, make sure that the mount attaches to
a different disk for each PureDisk node. Connect each node to a different
LUN.
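To illustrate the one-LUN-per-node requirement, the following sketch is hypothetical (node names and device paths are invented) and only echoes the mounts it describes so that it can be inspected safely:

```shell
# Hypothetical mapping: each node mounts a *different* LUN on /Storage.
# Device paths are invented for illustration; the commands are echoed,
# not executed.
echo "node1: mount /dev/disk/by-id/scsi-lun1-part1 /Storage"
echo "node2: mount /dev/disk/by-id/scsi-lun2-part1 /Storage"
```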
4 (Conditional) Remove the existing topology files.
Perform this step if the node you want to recover is the storage pool authority
node.
Remove the following files:
■ topology.ini or topology.ini.enc
164 Disaster recovery for clustered storage pools
Recovering one active node
■ topology_nodes.ini
# mkdir /Storage/data
# mkdir /Storage/databases
/Storage/log/pddb/postgresql.log
Messages such as the following in the log file are possible signs of a corrupted
database:
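The example messages themselves are not reproduced at this point in this copy. As a hedged illustration only, the lines below are generic PostgreSQL corruption indicators (not verbatim PureDisk output), together with one way to scan a log for them; a sample file in /tmp stands in for /Storage/log/pddb/postgresql.log:

```shell
# Illustrative sample log fragment: these messages are common PostgreSQL
# corruption indicators, NOT verbatim PureDisk output. A file in /tmp stands
# in for the real log at /Storage/log/pddb/postgresql.log.
cat > /tmp/postgresql.log.sample <<'EOF'
ERROR:  invalid page header in block 42 of relation "pd_segments"
ERROR:  could not read block 7 of relation base/16384/24576
EOF
# Count lines that match either corruption indicator.
grep -cE 'invalid page header|could not read block' /tmp/postgresql.log.sample
```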
# /cdrom/puredisk/install.sh --force
2 For each node, type the following command to configure a service address
on the public NIC in the node:
For ip_address, specify the service IP address of the service you want to
configure. For n, specify the number of the public network interface card
(NIC) on this node.
For example, on node1.acme.com, you could type the following command:
Note: Make sure to repeat this step on each active node, including the healthy
nodes. Because you removed the service groups for the entire storage pool,
you need to recreate the service addresses for each node at this time.
■ /Storage/etc/topology_nodes.ini
2 For each node, type the following command to configure a service address
on the public NIC in the node:
For ip_address, specify the service IP address of the service you want to
configure. For n, specify the number of the public network interface card
(NIC) on this node.
For example, on node1.acme.com, you could type the following command:
Note: Make sure to repeat this step on each active node, including the healthy
nodes. Because you removed the service groups for the entire storage pool,
you need to recreate the service addresses for each node at this time.
# /opt/pdinstall/edit_topology.sh
Your goal is to recreate the PureDisk topology so that it matches the topology
that existed before the disaster.
For information about the storage pool’s topology and node identification
information, see the worksheets that you completed during this storage pool’s
installation.
# /opt/pdinstall/edit_topology.sh
Note: Make sure you enter the correct storage pool ID. Make sure that all
passwords you use during the disaster recovery process are the same as those
that existed before the disaster occurred.
# /opt/pdinstall/install_newStoragePool.sh
/Storage/etc/topology.ini
/Storage/etc/topology_nodes.ini
The preceding files are not needed at this time. The disaster recovery script
restores these files from the backup.
# /opt/pdinstall/DR_Restore_all.sh
7 Provide the full path of any upgrade patch files that need to be applied.
For example:
/root/NB_PDE_6.6.1.17630.tar
If multiple upgrade patches need to be applied, provide the latest patch
that can be installed on top of the base version. Otherwise, provide the
patches in the order in which they should be applied (applicable to EEBs).
Leave blank and press Enter if there are no patches to apply.
Next, respond to the following prompt:
Answer yes (y) to apply additional patches. Answer no (n) to continue with
the disaster recovery process.
8 Respond to the prompts regarding the topology file.
Directory /Storage/etc must contain the following files:
■ topology.ini or topology.ini.enc
■ topology_nodes.ini
If these files are not present, the script issues the following prompt:
Please provide the virtual Fully Qualified Domain Name of your SPA:
Enter the service fully qualified domain name (FQDN) for the storage pool
authority (SPA) node. The script retrieves the files from the location you
provide.
If topology.ini.enc is present, the script issues the following prompt for
the password:
Type the password that you use for the storage pool configuration wizard.
9 Observe the messages that the script produces and take one of the following
actions:
■ If the host and service mappings are synchronized properly with the
topology on the storage pool, the script continues. Proceed to the following
section:
See “Running the DR_Restore_all script - phase 2 (NetBackup, clustered
recovery)” on page 180.
■ If the host and service mappings are not synchronized with the topology
on the storage pool, the script issues the following message and stops:
WARNING: You are running in a VCS environment. This means the topology_nodes.ini file
that has just been restored may be out of date. VCS failover events could have changed
the physical - service address mapping for nodes between the time the DR backup last ran
and now.
To verify these mappings, please run /opt/pdinstall/edit_topology.sh and select option
"Edit a node" to edit all PureDisk nodes and spare nodes in your topology. Verify that
for PureDisk nodes, the service address in the "Virtual IP/Hostname" entry is on the
same node as the physical address in the "IP/Hostname" entry. If not, update the
"IP/Hostname" entry to contain the correct physical address for the service address.
Verify that for spare nodes, the "IP/Hostname" entry is really the physical address of
a node that is currently acting as a spare node.
Also, select the option "Configure root broker" and verify that the root broker mapping
is correct.
Once you verified the physical - service address mapping is correct for all nodes, and
the root broker mapping, please run this script again.
For example, failovers might have occurred between the time of the last
disaster recovery backup and this restore. If so, the restore topology files
have invalid host and service address mappings for the nodes of the storage
pool.
10 (Conditional) Verify the NIC identifiers and (conditionally) correct the NIC
identifiers.
Perform this step if you reinstalled PDOS on any nodes.
When you reinstall PDOS, the NIC identifiers can be different from the NIC
identifiers that existed in the previous PDOS installation.
Perform the following steps to verify and, if necessary, correct the NIC
identifiers:
■ Log into the storage pool authority node and type the following command
to start the topology editor:
# /opt/pdinstall/edit_topology.sh
Use the topology editor to check and, if necessary, correct the NIC
identifier for the public NIC. The topology editor displays information
about the public NIC below the service addresses. You can change
information about the public NIC in the topology editor.
■ Open file /Storage/etc/topology_nodes.ini.
■ Search for the following keywords: firstprivate and secondprivate.
■ Verify that the ethn identifiers for firstprivate and secondprivate
point to the correct NICs.
■ (Conditional) Correct the ethn identifiers in the topology_nodes.ini file.
Perform this step if the ethn identifiers differ from those that existed
when PDOS was installed initially.
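The keyword search in the steps above can be sketched as follows. The sample file layout (simple key=value lines) is an assumption for illustration; a file in /tmp stands in for the real /Storage/etc/topology_nodes.ini:

```shell
# Sketch of the verification step. The key=value layout shown here is an
# assumption; a sample file in /tmp stands in for
# /Storage/etc/topology_nodes.ini.
cat > /tmp/topology_nodes.ini.sample <<'EOF'
firstprivate=eth1
secondprivate=eth2
EOF
# List the NIC identifiers recorded for the private interconnects.
grep -E '^(firstprivate|secondprivate)=' /tmp/topology_nodes.ini.sample
```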
The preceding example shows the topology that the script can restore in this
disaster recovery operation. Examine this information for accuracy, and
specify all nodes for restore. Use commas to separate the node numbers.
After the reinstall is complete, the script performs the following actions:
■ Remounts the mount points.
■ Restores all the data on the failed nodes.
■ Restores all the databases on all the nodes.
■ Restores any removed data. A data removal job might have been run since
the last time the databases were backed up. For this reason, the script also
restores the removed data on the nodes that did not fail. This action
synchronizes the databases and data.
4 When the restore is complete, answer the prompts about encryption of the
topology.ini file.
For example:
Type the password that you use for the storage pool configuration wizard.
5 Observe the completion message.
When the operation completes successfully, the script displays the following
message:
# /opt/pdcr/bin/crcontrol -m DEREF=Yes
3 Repeat the following steps until you have run this command on all content
router nodes:
■ Step 1
■ Step 2
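Repeating steps 1 and 2 across every content router node could be sketched as the loop below. The hostnames are invented for illustration, and the loop echoes each command rather than executing it over ssh, so nothing is changed here:

```shell
# Hypothetical sketch: iterate over the content router nodes and show the
# crcontrol command to run on each. Hostnames are invented; the commands
# are echoed, not executed.
for node in cr01.acme.com cr02.acme.com; do
  echo "ssh root@${node} /opt/pdcr/bin/crcontrol -m DEREF=Yes"
done
```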
4 (Conditional) Run the following script on the storage pool authority node to
upgrade the security protocol:
# /opt/pdinstall/disable_sslv2.sh
Perform this step if you ran the disable_sslv2.sh script on this storage pool
at any time. The disaster recovery restore does not enable this script
automatically.
Symantec recommends that you run the script unless PureDisk 6.5.x storage
pools need to replicate to this storage pool.
5 (Conditional) Reenable the NetBackup export engine on any nodes that hosted
only a NetBackup export engine service.
Perform this step only if you have a node that hosted only a NetBackup export
engine service.
For information about how to enable a NetBackup export engine, see the
following:
See “About exporting data to NetBackup” on page 73.
6 Perform a full disaster recovery backup.
Make sure you perform a full disaster recovery backup before you perform
any file backups or perform any incremental disaster recovery backups.
# /opt/pdinstall/DR_Restore_all.sh
6 Provide the information that PureDisk needs to mount the shared file system.
For example:
9 Type the full system path name to the disaster recovery script used to save
your PureDisk data.
If you used the DR_Restore_all.sh script in the default PureDisk location,
press Enter.
If you supplied your own restore script, PureDisk does not protect it. The
scripts are overwritten during a restore procedure if they remain in the default
installation directory (/opt). You must write them to another directory for
protection, such as /usr or /tmp.
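The protective copy can be sketched as below. The script name and directories are examples only (the demonstration stays entirely inside /tmp so it is safe to run; in practice the source would live under /opt):

```shell
# Minimal sketch: copy a custom restore script out of the default install
# directory so a restore does not overwrite it. The script name and the
# destination directory are invented examples; in practice the source would
# be under /opt.
mkdir -p /tmp/dr-protected
cat > /tmp/my_restore.sh <<'EOF'
#!/bin/sh
echo "custom restore logic"
EOF
cp /tmp/my_restore.sh /tmp/dr-protected/my_restore.sh
ls /tmp/dr-protected
```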
For example:
11 Provide the full path of any upgrade patch files that need to be applied.
Please provide the location of the upgrade patch tar file.
For multiple patches, enter in the order they should be applied.
(leave blank for none) :
For example:
/root/NB_PDE_6.6.1.17630.tar
If multiple upgrade patches need to be applied, provide the latest patch
that can be installed on top of the base version. Otherwise, provide the
patches in the order in which they should be applied (applicable to EEBs).
Leave blank and press Enter if there are no patches to apply.
Next, respond to the following prompt:
Answer yes (y) to apply additional patches. Answer no (n) to continue with
the disaster recovery process.
12 Type the storage pool ID for the storage pool that you want to restore.
This ID is the value specified for the storagepoolid property in the
topology.ini file. This value is used to retrieve the topology file.
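One way to look up the storagepoolid value beforehand is sketched below. The exact key layout of topology.ini is an assumption, the ID shown is invented, and a sample file in /tmp stands in for the real /Storage/etc/topology.ini:

```shell
# Illustrative lookup of the storagepoolid property. The key=value layout
# and the ID 4711 are assumptions; a sample file in /tmp stands in for
# /Storage/etc/topology.ini.
cat > /tmp/topology.ini.sample <<'EOF'
storagepoolid=4711
EOF
# Print only the value of the storagepoolid property.
sed -n 's/^storagepoolid=//p' /tmp/topology.ini.sample
```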
For example:
■ topology_nodes.ini
If these files are not present, the script retrieves them. If topology.ini.enc
is present, the script issues the following prompt for the password:
Type the password that you use for the storage pool configuration wizard.
14 Observe the messages that the script produces and take one of the following
actions:
■ If the host and service mappings are synchronized properly with the
topology on the storage pool, the script continues. Proceed to the following
section:
See “Running the DR_Restore_all script - phase 2 (Samba, clustered
recovery)” on page 188.
■ If the host and service mappings are not synchronized with the topology
on the storage pool, the script issues the following message and stops:
WARNING: You are running in a VCS environment. This means the topology_nodes.ini file
that has just been restored may be out of date. VCS failover events could have changed
the physical - service address mapping for nodes between the time the DR backup last ran
and now.
To verify these mappings, please run /opt/pdinstall/edit_topology.sh and select option
"Edit a node" to edit all PureDisk nodes and spare nodes in your topology. Verify that
for PureDisk nodes, the service address in the "Virtual IP/Hostname" entry is on the
same node as the physical address in the "IP/Hostname" entry. If not, update the
"IP/Hostname" entry to contain the correct physical address for the service address.
Verify that for spare nodes, the "IP/Hostname" entry is really the physical address of
a node that is currently acting as a spare node.
Also, select the option "Configure root broker" and verify that the root broker mapping
is correct.
Once you verified the physical - service address mapping is correct for all nodes, and
the root broker mapping, please run this script again.
For example, failovers might have occurred between the time of the last
disaster recovery backup and this restore. If so, the restore topology files
have invalid host address and service address mappings for the nodes of
the storage pool.
Correct these inconsistencies and run the DR_Restore_all script again.
15 (Conditional) Verify the NIC identifiers and (conditionally) correct the NIC
identifiers.
Perform this step if you reinstalled PDOS on any nodes.
When you reinstall PDOS, the NIC identifiers can be different from the NIC
identifiers that existed in the previous PDOS installation.
Perform the following steps to verify and, if necessary, correct the NIC
identifiers:
■ Log into the storage pool authority node and type the following command
to start the topology editor:
# /opt/pdinstall/edit_topology.sh
Use the topology editor to check and, if necessary, correct the NIC
identifier for the public NIC. The topology editor displays information
about the public NIC below the service addresses. You can change
information about the public NIC in the topology editor.
■ Open file /Storage/etc/topology_nodes.ini.
■ Search for the following keywords: firstprivate and secondprivate.
■ Verify that the ethn identifiers for firstprivate and secondprivate
point to the correct NICs.
■ (Conditional) Correct the ethn identifiers in the topology_nodes.ini file.
Perform this step if the ethn identifiers differ from those that existed
when PDOS was installed initially.
The preceding example shows the topology that the script can restore in this
disaster recovery operation. Examine this information for accuracy, and
specify all nodes for restore. Use commas to separate the node numbers.
After the reinstall is complete, the script performs the following actions:
■ Remounts the mount points.
■ Restores all the data on the failed nodes.
■ Restores all the databases on all the nodes.
■ Restores the removed data. A data removal job might have been run since
the last time the databases were backed up. For this reason, the script also
restores the removed data on the nodes that did not fail. This method
synchronizes the databases and data.
4 When the restore is complete, answer the prompts about encryption of the
topology.ini file.
For example:
Type the password that you use for the storage pool configuration wizard.
5 Observe the completion message.
When the operation completes successfully, the script displays the following
message:
# /opt/pdcr/bin/crcontrol -m DEREF=Yes
3 Repeat the following steps until you have run this command on all content
router nodes:
■ Step 1
■ Step 2
4 (Conditional) Run the following script on the storage pool authority node to
upgrade the security protocol:
# /opt/pdinstall/disable_sslv2.sh
Perform this step if you ran the disable_sslv2.sh script on this storage pool
at any time. The disaster recovery restore does not enable this script
automatically.
Symantec recommends that you run the script unless PureDisk 6.5.x storage
pools need to replicate to this storage pool.
5 (Conditional) Reenable the NetBackup export engine on any nodes that hosted
only a NetBackup export engine service.
Perform this step only if you have a node that hosted only a NetBackup export
engine service.
For information about how to enable a NetBackup export engine, see the
following:
See “About exporting data to NetBackup” on page 73.
6 Perform a full disaster recovery backup.
Make sure you perform a full disaster recovery backup before you perform
any file backups or perform any incremental disaster recovery backups.
# /opt/pdinstall/DR_Restore_all.sh
7 Type the full system path name to the disaster recovery script used to save
your PureDisk data.
If you used the DR_Restore_all.sh script in the default PureDisk location,
press Enter.
If you supplied your own restore script, remember that the scripts are not
protected. The scripts are overwritten during a restore procedure if they
remain in the default installation directory (/opt). To prevent this problem,
you must place them in another directory for protection (for example, in /usr
or /tmp).
For example:
9 Provide the full path of any upgrade patch files that need to be applied.
For example:
/root/NB_PDE_6.6.1.17630.tar
If multiple upgrade patches need to be applied, provide the latest patch
that can be installed on top of the base version. Otherwise, provide the
patches in the order in which they should be applied (applicable to EEBs).
Leave blank and press Enter if there are no patches to apply.
Next, respond to the following prompt:
Answer yes (y) to apply additional patches. Answer no (n) to continue with
the disaster recovery process.
10 Type the storage pool ID for the storage pool that you want to restore.
This ID is the value specified for the storagepoolid property in the
topology.ini file. This value is used to retrieve the topology file.
For example:
■ topology_nodes.ini
If these files are not present, the script retrieves them. If topology.ini.enc
is present, the script issues the following prompt for the password:
Type the password that you use for the storage pool configuration wizard.
12 Observe the messages that the script produces and take one of the following
actions:
■ If the host and service mappings are synchronized properly with the
topology on the storage pool, the script continues. Proceed to the following
section:
See “Running the DR_Restore_all script - phase 2 (third-party, clustered
recovery)” on page 196.
■ If the host and service mappings are not synchronized with the topology
on the storage pool, the script issues the following message and stops:
WARNING: You are running in a VCS environment. This means the topology_nodes.ini file
that has just been restored may be out of date. VCS failover events could have changed
the physical - service address mapping for nodes between the time the DR backup last ran
and now.
To verify these mappings, please run /opt/pdinstall/edit_topology.sh and select option
"Edit a node" to edit all PureDisk nodes and spare nodes in your topology. Verify that
for PureDisk nodes, the service address in the "Virtual IP/Hostname" entry is on the
same node as the physical address in the "IP/Hostname" entry. If not, update the
"IP/Hostname" entry to contain the correct physical address for the service address.
Verify that for spare nodes, the "IP/Hostname" entry is really the physical address of
a node that is currently acting as a spare node.
Also, select the option "Configure root broker" and verify that the root broker mapping
is correct.
Once you verified the physical - service address mapping is correct for all nodes, and
the root broker mapping, please run this script again.
For example, failovers might have occurred between the time of the last
disaster recovery backup and this restore. If so, the restore topology files
have invalid host and service address mappings for the nodes of the storage
pool.
Correct these inconsistencies and run the DR_Restore_all.sh script again.
13 (Conditional) Verify the NIC identifiers and (conditionally) correct the NIC
identifiers.
Perform this step if you reinstalled PDOS on any nodes.
When you reinstall PDOS, the NIC identifiers can be different from the NIC
identifiers that existed in the previous PDOS installation.
Perform the following steps to verify and, if necessary, correct the NIC
identifiers:
■ Log into the storage pool authority node and type the following command
to start the topology editor:
# /opt/pdinstall/edit_topology.sh
Use the topology editor to check and, if necessary, correct the NIC
identifier for the public NIC. The topology editor displays information
about the public NIC below the service addresses. You can change
information about the public NIC in the topology editor.
■ Open file /Storage/etc/topology_nodes.ini.
■ Search for the following keywords: firstprivate and secondprivate.
■ Verify that the ethn identifiers for firstprivate and secondprivate
point to the correct NICs.
■ (Conditional) Correct the ethn identifiers in the topology_nodes.ini file.
Perform this step if the ethn identifiers differ from those that existed
when PDOS was installed initially.
The preceding example shows the topology that the script can restore in this
disaster recovery operation. Examine this information for accuracy, and
specify all nodes for restore. Use commas to separate the node numbers.
After the reinstall is complete, the script performs the following actions:
■ Remounts the mount points.
■ Restores all the data on the failed nodes.
■ Restores all the databases on all the nodes.
■ Restores the removed data. A data removal job might have been run since
the last time the databases were backed up. In that case, the script also
restores the removed data on the nodes that did not fail. This method
synchronizes the databases and data.
For example:
Type the password that you use for the storage pool configuration wizard.
5 Observe the completion message.
When the operation completes successfully, the script displays the following
message:
# /opt/pdcr/bin/crcontrol -m DEREF=Yes
3 Repeat the following steps until you have run this command on all content
router nodes:
■ Step 1
■ Step 2
4 (Conditional) Run the following script on the storage pool authority node to
upgrade the security protocol:
# /opt/pdinstall/disable_sslv2.sh
Perform this step if you ran the disable_sslv2.sh script on this storage pool
at any time. The disaster recovery restore does not enable this script
automatically.
Symantec recommends that you run the script unless PureDisk 6.5.x storage
pools need to replicate to this storage pool.
5 (Conditional) Reenable the NetBackup export engine on any nodes that hosted
only a NetBackup export engine service.
Perform this step only if you have a node that hosted only a NetBackup export
engine service.
For information about how to enable a NetBackup export engine, see the
following:
See “About exporting data to NetBackup” on page 73.
6 Perform a full disaster recovery backup.
Make sure you perform a full disaster recovery backup before you perform
any file backups or perform any incremental disaster recovery backups.
Chapter 8
Storage pool authority
replication (SPAR)
This chapter includes the following topics:
Note: The main storage pool can be configured as a clustered storage pool.
However, Symantec does not support SPAR for clustered local storage pools. When
SPAR runs under cluster control, a failover moves all node functions to a passive
node. However, the failover does not move the SPAR feature that you enabled on
the original local storage pool authority node.
Figure 8-1 shows an example PureDisk environment with two storage pools.
(Figure labels: SPAR backup, SPAR restore, SP_main, SP_local, PureDisk agent.)
SP_local is a small, local storage pool in Duluth and SP_main is in a main office
in Minneapolis. SPAR is implemented to back up system information from
SP_local to SP_main. The information in this section uses this example
environment.
SPAR’s main benefit is that it enables you to restart a storage pool and
begin backing up data soon after a disaster.
A SPAR recovery is best performed in the following circumstance:
■ You have an all-in-one local storage pool that is down completely.
■ You want to restore all your user information, data selection definitions, backup
policies, and system policies. This data includes all the user data and storage
pool data that enables client backups. This data does not include the backup
data or backup metadata.
■ You want to be able to start backing up data again very quickly.
SPAR differs from the other disaster recovery methods because SPAR does not
recover your backup data or metadata. A full disaster recovery can take several
hours or days, depending on how much data you backed up. SPAR recoveries are
faster. After a SPAR recovery, PureDisk sees the local storage pool as if it were a
newly configured storage pool. The backups you perform immediately after a
SPAR recovery are all full backups.
When you enable both comprehensive disaster recovery backups and SPAR, you
can choose the recovery method you want to use. If you perform a SPAR recovery,
you can use full disaster recovery methods to restore your file data and metadata.
Note: If you experience replication job performance degradation and you have a
high-latency communication network between the two storage pools, you can
possibly improve performance by changing some default TCP/IP settings. For
more information, see "About changing TCP/IP settings to improve replication
job performance" in the PureDisk Best Practices Guide, Chapter 5: Tuning PureDisk.
Data restored
■ Full disaster recovery: Storage pool metadata, file data, and file metadata.
■ SPAR: Storage pool metadata.
Estimated restore time
■ Full disaster recovery: Depends on the amount of data. This step can take hours.
■ SPAR: SPAR restores take much less time than a complete disaster recovery.
Storage pool type
■ Full disaster recovery: Any type of storage pool.
■ SPAR: The protected storage pool must be an all-in-one, single-node,
unclustered storage pool.
Restore goal
■ Full disaster recovery: You want to restore the storage pool and all backups.
■ SPAR: You want to restore the storage pool and back up the clients as
quickly as possible.
State of restored storage pool
■ Full disaster recovery: Restores your storage pool to the state it was in
when the last disaster recovery backup was run.
■ SPAR: Restores your storage pool users, accounts, data selections, policies,
and all other storage pool configuration information. This method does not
restore any file data or file metadata. After you restore, you need to run
backups for all your clients. Old or changed data is no longer available.
Login: The root user’s login on the main storage pool authority node.
Host name (FQDN): The fully qualified domain name (FQDN) of the main
storage pool.
Example:
SP_Main.acme.com
Storage pool name: The local storage pool’s host name. Type this name as you
want it to appear in the main storage pool’s Web UI.
Binary Location: The path to the Linux agent software on the main storage
pool. This agent software is the same as the agent software that you use for
all other Linux PureDisk clients. You can specify an IP address or a host
name to identify the main storage pool.
In this field, do not specify the actual file in which the Linux agent
installation software resides. You specify the file name in the next field,
Binary.
Example:
/opt/pdweb/htdocs/download/Linux_Clients
https://SP_Main.acme.com/download/Linux_Clients
Binary: The name of the file that includes the Linux agent installation
software on the main storage pool. The Binary Location field’s content
points to this file name. For example, pdagent-Linux_2.6_x86-6.2.0.5.run.
Path: The path to which you want to install the Linux agent on the local
storage pool. For example, /opt/SPAR.
Caution: Do not specify the path to the primary server agent on the local
storage pool. For example, if the server agent is in its default location,
do not specify /opt in this field. This path is the location of the primary
server agent on the local storage pool. Do not overwrite this file.
Dump Path: The path to a dump directory on the local storage pool.
PureDisk writes the local storage pool’s system information to this location
before it copies the system information to the main storage pool. Do not
specify an existing directory. Specify only a unique directory that SPAR can
use exclusively.
Use the information in the Scheduling tab to first suspend and later reenable
this policy.
3 Select times in the Escalate warning after or the Escalate error and terminate
after drop-down boxes. These times specify the elapsed time before PureDisk
sends a message.
PureDisk can notify you if a policy does not complete its run within a specified
time. For example, you can configure PureDisk to send an email message to
an administrator if a policy does not complete in an hour.
If you select either of these options, create a policy escalation action. The
action defines the email message, defines its recipients, and associates the
escalation action with the policy. For more information about policy
escalations, see the PureDisk Backup Operator's Guide.
2 Install the PureDisk Operating System (PDOS) on the local storage pool.
For more information about how to install PDOS, see the PureDisk Storage
Pool Installation Guide.
3 Use the storage pool configuration wizard to configure the PureDisk storage
pool.
Configure the storage pool software on the local storage pool. For more
information about how to use the storage pool configuration wizard, see the
PureDisk Storage Pool Installation Guide.
Perform the following steps if the storage pool software does not function:
■ Remove the storage pool software’s previous upgrade package.
The following is the directory you need to remove:
/etc/puredisk
If you specify a different storage pool ID, the storage pool becomes
inoperable after you perform the SPAR restore.
When you configure the new storage pool software, specify the same
passwords that you specified during the previous configuration.
4 Deactivate the agent in the main storage pool that performed the SPAR.
Perform the following steps:
■ Log in to the main storage pool’s Web UI.
■ In the left pane, click Settings > Topology.
■ Select the PureDisk agent that represents the PureDisk storage pool.
■ In the right pane, click Deactivate Agent.
Argument Meaning
--agentlocation Full path to the directory in which the agent resided for the
previous SPAR backup. You specified this information in the
Path field when you configured SPAR.
--binary File name for agent installer on the main storage pool. You
specified this information in the Binary field when you
configured SPAR.
--binaryloc Path to the agent installer on the main storage pool. Do not
include the file name at the end of this path. You specified
this information in the Binary Location field when you
configured SPAR.
--dumpdir Full path to the restore directory. Specify the same dump
directory that you used for the previous SPAR backup.
--hostname Host name of the local storage pool as it appeared in the main
storage pool’s Web UI for the previous SPAR backup.
2 In the left pane, click the plus sign (+) to the left of the
local storage pool’s agent icon.
■ About reports
About reports
The following topics explain how to run and display PureDisk reports:
■ See “Permissions and guidelines for running and viewing reports” on page 214.
■ See “Reports for a running job” on page 215.
■ See “About policies and workflows” on page 216.
Permissions and guidelines for running and viewing reports
For more information about permissions, see the PureDisk Client Installation
Guide.
Types of workflows
A workflow is a collection of steps that PureDisk completes to accomplish a task.
A policy is a special kind of workflow. To create a policy manually, or to edit a
policy, click Manage > Policies. The PureDisk Web UI categorizes policies and
workflows as follows:
■ Backup Policies
■ Data Management Policies
■ Storage Pool Management Policies
■ Restore Workflows
■ Miscellaneous Workflows
If you upgraded from a previous PureDisk release, you might also see the Legacy
Workflows category with one or more workflows beneath it. For example, this
category might contain the following workflows:
■ 6.5 Data selection removal workflow
■ 6.5 Rerouting Workflow
■ 6.5 MBDataMining workflow
Whether the Web UI displays any legacy workflows depends on the presence of
existing workflows at the time of your upgrade. If you ran a data mining policy
before you applied an upgrade, the workflow appears in the Web UI after the
upgrade is installed. You can examine the outcomes of these workflows, or you
can delete them.
Workflows in policies
A workflow step defines a PureDisk action. PureDisk accomplishes its work by
running a series of workflow steps. The individual workflow steps are predefined,
and each performs a specific action. When you use PureDisk to perform a backup,
a restore, or any other kind of task, PureDisk completes that task by running
several workflows.
A policy defines a data management or maintenance action. Within a backup
policy, for example, the schedule determines when the policy runs, and the
policy specifies the agents to back up, the data selections to back up, and
various other parameters. PureDisk can stop processing after a timeout.
A timeout can occur in two different ways:
■ In a workflow step. PureDisk permits internal workflow steps to run only for
a limited time.
■ In a policy. The General tab of a backup policy lets you specify the amount of
time a policy can run before PureDisk terminates the policy run.
PureDisk’s internal watchdog monitors workflow steps. In the case of a backup
policy, the watchdog issues a message if the backup does not complete within the
specified backup window. The watchdog also issues messages for individual
workflow steps or policies that terminate. You can configure event monitoring to
notify you of these occurrences. For more information about how to configure
events, see the PureDisk Backup Operator’s Guide.
Obtaining detailed job reports
General Includes the job’s execution status, whether there were any errors
during the job’s run, and when the job commenced.
Details Shows the status for each specific part of a job’s run. On this tab,
you can see how PureDisk breaks a job apart for processing.
Files Lists the files that the job backed up. Includes whether PureDisk
backed up the files successfully, the client upon which the file
resided, and the name of the file.
Errors Lists the files with the errors that PureDisk encountered when
it processed the job.
■ Delete job
If you see this message, perform the procedure in the following section:
See “Examining lengthy job logs” on page 233.
For efficiency reasons, PureDisk always uploads files smaller than 16 KB to the
content router, even if they are already stored on the content router. Consequently,
the backup statistics can be different from what you expect if you back up many
files smaller than 16 KB. For example, the data reduction factor can be lower than
expected, or the number of bytes transferred can be higher than expected.
Table 9-1 contains information about how to interpret the statistics in a backup
job.
Data Reduction:
Global data reduction savings The percentage of source data bytes that did
not have to be transmitted to the content routers because of
data reduction. Higher numbers correlate to more efficiency.
Global data reduction factor The total number of bytes for the files that
PureDisk backed up divided by the number of bytes transferred
to the content routers. Higher numbers correlate to more efficiency.
Data Uniqueness:
Unique files and folders backed up The number of backed up files that were
globally unique, after global data reduction, before segmentation,
and before compression.
This statistic is the number of files that are unique in the group
of data selections under consideration. The files themselves are
considered, but optimization through segmentation is not
considered. For example, if a file resides on three different
clients, PureDisk stores the file only once and counts it only
once in this number. At the segment level, however, PureDisk
performs more optimization. A file segment can be present in
more than one file, and PureDisk stores that segment only once.
Unique bytes backed up The total number of bytes in the backed up files that were
globally unique.
Table 9-1 Lines in the Statistics tab for a backup job (continued)
Source selection:
Files selected on source The number of files that meet the data selection inclusion and
exclusion rules. Pertains to regular files only. This number does
not include the number of special files, such as symbolic links
or device special files.
Bytes selected on source The total number of bytes for the files that meet the data
selection inclusion and exclusion rules. Pertains to regular files
only. This number does not include the volume of special files,
such as symbolic links or device special files.
Files new on source The number of selected files that are new compared to the
previous backup run. Pertains to regular files only. This number
does not include the number of special files, such as symbolic
links or device special files.
Bytes new on source The total number of bytes for the selected files that are new
compared to the previous backup run. Pertains to regular files
only. This number does not include the volume of special files,
such as symbolic links or device special files.
Files modified on source The number of selected files that were modified compared to
the previous backup run. Pertains to regular files only. This
number does not include the number of special files, such as
symbolic links or device special files.
Bytes modified on source The total number of bytes for the selected files that were
modified compared to the previous backup run. This number
does not include the volume of special files, such as symbolic
links or device special files.
Files not modified on source The number of files that were not modified
since the last backup ran.
Bytes not modified on source The total number of bytes for the files that
remained unchanged since the last backup ran.
Files deleted on source The number of files that were deleted since the last backup ran.
Bytes deleted on source The total number of bytes for the files that were deleted since
the last backup ran.
Network:
Backup speed The rate at which PureDisk backed up the total volume of source
data. If only a small amount of unique data needs to be backed
up, this number is higher. If the source data has never been
backed up to PureDisk before, the number is lower.
Bytes transferred The total number of bytes of unique data that were transferred
to the storage pool’s content routers after segmentation and
compression. Includes data related to special files. For special
files, PureDisk stores a special data object on the content routers
to be able to restore these files.
Protected Data:
Source files backed up The number of selected files that were backed up correctly. This
is the sum of new, modified, and nonmodified files that are
correctly backed up and do not contain errors.
Source bytes backed up The total number of bytes for the selected files that were backed
up correctly.
Source files with errors The number of selected files that PureDisk could not back up.
Source bytes with errors The total number of bytes for the selected files that PureDisk
could not back up.
Time:
Start date/time The date and time that the job started.
Stop date/time The date and time that the job ended.
Backup time duration The amount of time that elapsed between when the job started
and when the job ended.
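The two data reduction statistics at the top of Table 9-1 are both derived from the number of bytes backed up versus the number of bytes transferred. The following is a minimal sketch of that arithmetic; the helper names and example byte counts are illustrative, not part of PureDisk:

```python
def data_reduction_factor(bytes_backed_up: int, bytes_transferred: int) -> float:
    # Total bytes for the files PureDisk backed up, divided by the
    # bytes actually transferred to the content routers.
    return bytes_backed_up / bytes_transferred

def data_reduction_savings(bytes_backed_up: int, bytes_transferred: int) -> float:
    # Percentage of source bytes that did not have to be transmitted.
    return 100.0 * (1 - bytes_transferred / bytes_backed_up)

# Example: 10 GB selected on the source, 500 MB sent to the content routers.
print(data_reduction_factor(10_000_000_000, 500_000_000))   # 20.0
print(data_reduction_savings(10_000_000_000, 500_000_000))  # 95.0
```

Higher values of either number indicate that deduplication removed more of the source data before transfer.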
Notes:
■ Table 9-1 shows the statistics for one job. However, the data mining reports,
when run at the storage pool level, show the data reduction factor for the
storage pool. The storage pool data reduction factor in the data mining reports
represents the volume of all data ever backed up to that storage pool, in bytes,
that is retained and currently available for restores versus the number of bytes
consumed on the content routers.
The storage pool data reduction factor differs from the statistics because the
statistics in the table are generated for only one job.
More information is available about the data mining reports.
See “About Data mining reports” on page 233.
■ Several factors can affect the Bytes transferred statistic. The data selection
may contain a huge number of small files or have very small segment sizes.
In these cases, the bytes transferred can be much larger than the on-source
values.
The following additional information applies to this statistic:
■ If the backup includes only special files, the "...on source" statistics show
0 files selected because there were no regular files to back up, but the Bytes
transferred statistic can be a large number.
■ Compression has the greatest effect on the Bytes transferred statistic. If
you enable compression, the Bytes transferred statistic is usually lower
than the Bytes selected on source statistic. The Bytes transferred statistic
might be higher if the data being transferred cannot be compressed. Data
that cannot be compressed includes data that is already compressed such
as movies, files in JPEG format, files in MP3 format, or files in ZIP format.
For files that are already compressed, the compression is ineffective and
might result in a slight increase of the data to be transferred.
■ For a repeated backup, the Bytes transferred statistic should be much
lower than Bytes selected on source. The Bytes selected on source statistic
is the sum of all bytes present in the data selection. For an initial backup,
if you disable compression, the Bytes transferred statistic is usually higher
than the Bytes selected on source statistic because of the overhead in the
internal data format.
■ The rate of data change on the client affects the Bytes transferred statistic.
The Bytes selected on source represents the sum of all bytes in the entire
data selection. If the data change rate is 100% (for example, if all files
changed or it is a first-time backup) and you disable compression, the Bytes
transferred statistic is always higher. If the change rate is less than 100%,
Bytes transferred statistic is lower.
■ The file size affects the Bytes transferred statistic. As a performance
enhancement, PureDisk always transfers files whose content is smaller than
the segment size. In this case, PureDisk does not perform a prior-existence
check on the content routers. In contrast, for files whose content is larger
than the segment size, PureDisk always performs a prior-existence check.
Statistic Meaning
Restore Selection:
Total files The total number of files and directories that PureDisk restored.
The Directory count statistic reports the number of directories
restored.
Bytes total The total number of bytes in all files and directories that
PureDisk restored.
Target:
Table 9-2 Lines in the Statistics tab for a restore job (continued)
Files new on target The number of new files that reside on the client after the
restore is complete. If you restore to the original directory and
overwrite the original files, PureDisk reports that there are no
new files. If you restore the files to a different directory for the
first time, PureDisk reports that all the files you restored are
new files on the client.
Bytes new on target The number of bytes occupied by the restored files. This statistic
is the number of bytes consumed by the files that are noted in
the Files new statistic.
Files modified on target The number of files that are different on PureDisk storage when
compared to the target directory for the restore. This number
counts the number of files on the client source that have
different content when compared to the files you restored.
Bytes modified on target The number of bytes occupied by the files in the Files modified
on target statistic.
Files unmodified on target The number of files that are identical on both
PureDisk storage and on the target directory. For example, if this value
is 0, all the files you restored have changed since they were backed up.
Bytes unmodified on target The number of bytes occupied by the files in
the Files unmodified on target statistic.
Network:
Bytes received by agent The number of bytes actually restored. If nothing has been
replaced, the value is 0.
Average restore rate The average transfer rate during the transmission of unique
data.
Data Uniqueness:
Unique items restored The number of items that PureDisk wrote to the target computer.
This statistic is a count of the number of files, directories, and
special files. It includes only the items that were different on
the target computer as compared to PureDisk storage. If nothing
has changed, this value is 0.
Unique items received Number of unique items that were included in this restore job.
The count excludes directories and special files.
Restore Failures:
Error count The number of errors that were generated during the restore.
Files with errors The number of files that generated errors during the restore
and could not be restored.
Bytes with errors The total number of bytes represented in files that had an error
and could not be restored. For example, if a 1-MB file could not
be restored due to an error, this statistic is 1 MB.
ACL errors The total number of errors encountered when the job attempted
to restore ACLs. This value can be nonzero for a variety of
reasons. For example, the following conditions, and others, can
cause ACL restore errors:
Verification failures The number of files for which verification failed. This field is
applicable only if you backed up the files with verification
enabled.
Restore Successes:
Directory count The number of all unique directories in the path to each file that
PureDisk restored. Even if you restore only one file from a
directory, PureDisk includes that directory in this statistic.
For example, assume that you restore file1 and file2 from
the following paths:
■ /a/b/c/file1
■ /a/b/d/file2
In this case, PureDisk counts /a, /a/b, /a/b/c, and /a/b/d, so
the Directory count statistic is 4.
Devices (Linux and UNIX systems only) The number of block and
character device files restored. This value is always 0 on
Windows systems.
Symbolic links (Linux and UNIX systems only) The number of symbolic links
restored. This value is always 0 on Windows systems.
Hard links (Linux and UNIX systems only) The number of hard links
restored. This value is always 0 on Windows systems.
Verification successes The number of files for which verification succeeded. This field
is applicable only if you backed up the files with verification
enabled.
Time:
Start date/time The date and time that the job started.
Stop date/time The date and time that the job ended.
Restore time duration The amount of time that elapsed between when the job started
and when the job ended.
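The Directory count example above can be reproduced with a short sketch. This code is illustrative and not part of PureDisk; it simply counts every unique directory that appears in the path to any restored file:

```python
import os

def directory_count(paths):
    # Walk up from each restored file and record every unique
    # ancestor directory, the way the Directory count statistic does.
    dirs = set()
    for path in paths:
        d = os.path.dirname(path)
        while d not in ("", "/"):
            dirs.add(d)
            d = os.path.dirname(d)
    return len(dirs)

# The example above: /a, /a/b, /a/b/c, and /a/b/d are all counted.
print(directory_count(["/a/b/c/file1", "/a/b/d/file2"]))  # 4
```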
Statistic Meaning
Source selection:
Items new in source data selection The number of data objects replicated to
the target storage pool that were not included in a previously
replicated PureDisk backup.
Bytes new in source data selection The number of bytes replicated to the
target storage pool that have not been included in a previous
PureDisk backup.
Items modified in source data selection The number of data objects replicated
that have been modified since the previous replication. This number
counts the data objects on the source that have different content
when compared to the files you replicated at an earlier time.
Bytes modified in source data selection The number of bytes occupied by the
data objects in the Items modified in source data selection statistic.
Items deleted in source data selection The number of data objects that were
deleted from the source data selection since the last replication.
Table 9-3 Lines in the Statistics tab for a replication job (continued)
Bytes deleted in source data selection The number of bytes occupied by the
data objects in the Items deleted in source data selection statistic.
This statistic is the total number of bytes deleted from the source
data selection.
Errors:
Items with replication errors The number of data objects that generated
errors during the replication process.
Bytes with replication errors The number of bytes of data in the Items with
replication errors statistic that generated errors during the
replication process.
Replicated Data:
Items replicated The number of files, directories, or data items replicated to the
target storage pool.
Bytes replicated The number of bytes replicated to the target storage pool.
Network:
Bytes transferred The number of bytes transferred to the target storage pool. This
statistic includes bytes included in any overhead that was needed
for the transfer.
Time:
Start date/time The date and time that the job started.
Stop date/time The date and time that the job ended.
Replication time duration The amount of time that elapsed between when the
job started and when the job ended.
Statistic Meaning
Data Reduction:
Global data reduction savings The percentage of source data bytes that did
not have to be transmitted to the content routers because of data
reduction. Higher numbers correlate to more efficiency.
Source Selection:
Bytes scanned during backup The total number of bytes scanned by PDDO from
the backup.
Media server cache hit percentage The percentage of backup data that PureDisk
found in the media server's cache.
Network:
Bytes transferred to content router The number of bytes of new, nondeduplicated
data that PureDisk sent to the content router for storage.
Time:
Backup time duration The amount of time that elapsed between when the job
started and when the job ended.
If a data lock password is enabled on an agent, this tab prompts you for the
password when you attempt to view it. For more information about the data lock
password, see the PureDisk Client Installation Guide.
The following information appears on this tab:
Agent The name of the agent from which the data selection was backed up.
Folder Specifies the folder that contains the file on the client.
Modified The date and time that the file was last modified. Also see the Enable
change detection backup feature. For more information about specific
backup features, see the PureDisk Backup Operator’s Guide.
This screen contains no information when PureDisk does not back up any files.
This situation is possible for an incremental backup if files have not changed.
Tip: You can restore a file by clicking a Download link in the Download column.
The value of 0 indicates that no limit is set. It does not mean that no data is
transferred.
The following topics describe how to edit, run, manipulate, and read data from
data mining policies:
■ See “Enabling a data mining policy” on page 234.
■ See “Running a data mining policy manually” on page 236.
■ See “Obtaining data mining policy output - the data mining report” on page 236.
■ See “Obtaining data mining policy output - the Web service report” on page 239.
■ If you select Disabled, PureDisk does not run the policy according to the
schedule in the Scheduling tab. This value is the default.
For example, you can use Disabled if you want to stop running this policy
during a system maintenance period, but you do not want to enter
information in the Scheduling tab to suspend, and then reenable, this
policy.
Total Storage Pool Volume Used The volume of backup data, in bytes, on the
content routers in this storage pool.
Total Storage Pool Data Reduction Factor The volume of all data ever backed
up to this storage pool, in bytes, that is retained and currently
available for restores divided by the global storage pool volume.
Total size on source The volume of files, in bytes, in this data selection
on the source client. This number includes all versions of all files.
Storage pool volume used The estimated data volume, in bytes, stored on the
storage pool’s content routers for this data selection. This statistic
is the source size of this data selection divided by the storage pool
data reduction factor.
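The Storage pool volume used estimate is a simple division, as the description states. A minimal sketch, with illustrative numbers that are not from any real storage pool:

```python
def storage_pool_volume_used(total_size_on_source: int,
                             reduction_factor: float) -> float:
    # Estimated bytes stored on the content routers for one data
    # selection: source size divided by the data reduction factor.
    return total_size_on_source / reduction_factor

# Example: 150 MB on the source with a pool-wide reduction factor of 12.5.
print(storage_pool_volume_used(150_000_000, 12.5))  # 12000000.0
```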
A smaller segment size can yield better data reduction rates. However, performance
can degrade because of the higher maintenance costs involved in managing a
larger number of segments.
A larger segment size can yield better performance, but the data reduction rate
can degrade. Larger segments can also consume more disk space.
PureDisk considers the following factors when it segments the file:
■ The default segment size for the data selection type or the segment size you
specify.
■ The maximum number of segments allowed, which is 5,120 segments.
■ The maximum segment size allowed, which is 50 MB.
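The listed limits constrain how a file can be split. The following sketch shows how the stated limits interact; it is not the documented PureDisk segmentation algorithm, only an illustration of the arithmetic:

```python
import math

MAX_SEGMENTS = 5_120                  # maximum number of segments allowed
MAX_SEGMENT_SIZE = 50 * 1024 * 1024   # maximum segment size allowed (50 MB)

def effective_segment_size(file_size: int, configured_size: int) -> int:
    # Start from the configured (or default) segment size, then grow it
    # if the file would otherwise exceed the 5,120-segment limit,
    # capping the result at the 50 MB maximum segment size.
    size = configured_size
    if math.ceil(file_size / size) > MAX_SEGMENTS:
        size = math.ceil(file_size / MAX_SEGMENTS)
    return min(size, MAX_SEGMENT_SIZE)

# A 1 GB file with a 128 KB configured segment size would need 8,192
# segments, so the segment size grows to stay within 5,120 segments.
print(effective_segment_size(1 << 30, 128 * 1024))  # 209716
```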
https://url/spa/ws/ws_datamining.php?login=login&passwd=pwd&action=getReport&runid=num
Argument Meaning
url The URL for the storage pool authority. For example:
100.100.100.100.
login The storage pool authority administrator login. For example: root.
Table 9-6 Arguments in the data mining Web services reports (continued)
num The number of the data mining policy run that you want to display in
report format. PureDisk retains the last 10 runs of the data mining
workflow.
For example, if you want to display the most recent policy run, specify
1. If you want to display information from the policy run just before
the most recent, specify 2. If you ran the data mining policy every day
for the last 10 days and you want to display the oldest run, specify 10.
To verify the report output with data mining policy runs, compare the
timestamp in the header of the report with the times of your data
mining policy runs.
When you run the Web service report to obtain data mining output, you retrieve
information on all data selections in the storage pool. You cannot narrow the
report to include information for only one data selection.
Information about how to report on only one data selection is available.
See “Obtaining data mining policy output - the data mining report” on page 236.
For example, assume that you type the following URL:
https://valhalla.minnesota.com/spa/ws/ws_datamining.php?login=root&passwd=root&action=getReport&runid=1
<MBDatamining TimeStamp="2007-08-30 03:20:02 PM">
<filtre>*</filtre>
<mbe_range_statistics>
<mbe id="1">
<dataselection id="4" dataselectionname="desktop" agentid="2"
agentname="TRAVELSCRABBLE" locationid="0" departmentid="0"
locationname="Unknown location" departmentname="Unknown department"
ostype="10">
<location name="Unknown location"/>
<department name="Unknown department"/>
<sizeOnSource_dataselection
unit="bytes">153405265</sizeOnSource_dataselection>
<sizeOnStoragePool_dataselection
unit="bytes">4176478208</sizeOnStoragePool_dataselection>
<ACCESSRANGE>
<item id="-1 day">
<amountoffiles>13</amountoffiles>
<totalfilesize>33017267</totalfilesize>
</item>
<item id="1 day-1 week">
<amountoffiles>16</amountoffiles>
<totalfilesize>60604429</totalfilesize>
</item>
<item id="1 month-1 year">
<amountoffiles>45</amountoffiles>
<totalfilesize>56263445</totalfilesize>
</item>
<item id="1 week-1 month">
<amountoffiles>4</amountoffiles>
<totalfilesize>3520124</totalfilesize>
</item>
</ACCESSRANGE>
<MODRANGE>
<item id="+1 year">
<amountoffiles>10</amountoffiles>
<totalfilesize>1814927</totalfilesize>
</item>
<item id="-1 day">
<amountoffiles>2</amountoffiles>
<totalfilesize>29874649</totalfilesize>
</item>
<item id="1 day-1 week">
<amountoffiles>1</amountoffiles>
<totalfilesize>207</totalfilesize>
</item>
<item id="1 month-1 year">
<amountoffiles>61</amountoffiles>
<totalfilesize>70561729</totalfilesize>
</item>
<item id="1 week-1 month">
<amountoffiles>4</amountoffiles>
<totalfilesize>51153753</totalfilesize>
</item>
</MODRANGE>
<SIZERANGE>
<item id="0-10KB">
<amountoffiles>20</amountoffiles>
<totalfilesize>26942</totalfilesize>
</item>
<item id="100KB-1MB">
<amountoffiles>19</amountoffiles>
<totalfilesize>7200946</totalfilesize>
</item>
<item id="10KB-100KB">
<amountoffiles>29</amountoffiles>
<totalfilesize>1745269</totalfilesize>
</item>
<item id="10MB-100MB">
<amountoffiles>4</amountoffiles>
<totalfilesize>129892970</totalfilesize>
</item>
<item id="1MB-10MB">
<amountoffiles>6</amountoffiles>
<totalfilesize>14539138</totalfilesize>
</item>
</SIZERANGE>
<TYPES>
<item id="0">
<amountoffiles>78</amountoffiles>
<totalfilesize>153405265</totalfilesize>
</item>
</TYPES>
</dataselection>
</mbe>
</mbe_range_statistics>
<dataselectionlist_SIS_reporting>
<global_storagepool_VOL
unit="bytes">4176478208</global_storagepool_VOL>
<global_storagepool_SIS>0.03673077108511</global_storagepool_SIS>
</dataselectionlist_SIS_reporting>
<MBDataminingHistory/>
</MBDatamining>
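Because the report is plain XML, you can post-process it with any XML library instead of importing it into a spreadsheet. The following sketch tallies SIZERANGE items from output like the preceding; the element names come from the sample above, and the parsing code itself is only an illustration:

```python
import xml.etree.ElementTree as ET

def size_ranges(xml_text):
    # Collect (file count, total size) per <item> range from a
    # data mining report fragment.
    return {
        item.get("id"): (int(item.findtext("amountoffiles")),
                         int(item.findtext("totalfilesize")))
        for item in ET.fromstring(xml_text).iter("item")
    }

sample = """<SIZERANGE>
  <item id="0-10KB"><amountoffiles>20</amountoffiles><totalfilesize>26942</totalfilesize></item>
  <item id="10KB-100KB"><amountoffiles>29</amountoffiles><totalfilesize>1745269</totalfilesize></item>
</SIZERANGE>"""

print(size_ranges(sample))
# {'0-10KB': (20, 26942), '10KB-100KB': (29, 1745269)}
```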
The report output is in XML format. You can import the XML output to a
spreadsheet. See the following section for more information:
See “Importing report output into a spreadsheet” on page 249.
Caution: For security reasons, use a Web browser that uses POST requests, not
GET requests, when retrieving Web service reports. For example, Microsoft
Internet Explorer does not use POST requests and is not secure.
You can also follow your spreadsheet’s instructions for importing the XML data.
For example, the following URL contains login information, password information,
and a request for information about successful job runs:
https://100.100.100.100/spa/ws/ws_getsuccessfuljobs.php?login=root&passwd=root
Note: The Web UI URL parameters are case sensitive. Make sure that you type
them exactly as shown in this chapter. The ampersand (&) character acts as a
separator for the fields in the URL. The bracket characters ([ ]) in the
following sections represent optional URL fields.
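Rather than assembling these URLs by hand, you can build them programmatically. A sketch using Python's standard library (the host, service name, and credentials below are the examples from this chapter; note that `urlencode` inserts the ampersand separators and encodes the spaces in a workflow name, whereas the manual shows such values with literal spaces):

```python
from urllib.parse import urlencode

def job_report_url(host, web_service, login, passwd, **filters):
    # Assemble a job status Web service URL; urlencode joins the
    # fields with ampersands and encodes special characters.
    query = urlencode({"login": login, "passwd": passwd, **filters})
    return f"https://{host}/spa/ws/{web_service}?{query}"

print(job_report_url("100.100.100.100", "ws_getsuccessfuljobs.php",
                     "root", "root",
                     workflowName="Files and Folders Restore"))
# https://100.100.100.100/spa/ws/ws_getsuccessfuljobs.php?login=root&passwd=root&workflowName=Files+and+Folders+Restore
```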
The following sections describe reports that you can obtain through the Web
services:
■ See “Job status Web service reports” on page 243.
■ See “Dashboard Web service reports” on page 246.
■ See “Obtaining data mining policy output - the Web service report” on page 239.
The URL format for a Web service report on job statuses is as follows:
https://url/spa/ws/web_service?login=login&passwd=pwd[&filter][&filter]
Argument Meaning
url The URL for the storage pool authority. For example:
100.100.100.100.
web_service Specifies the type of Web service. The job status reports
generate information about successful, partially
completed, and failed jobs. Type one of the following:
■ ws_getsuccessfuljobs.php
■ ws_getpartialjobs.php
■ ws_getfailedjobs.php
Table 9-8 shows the filters you can specify on a Web service URL for the job status
reports.
Filter Meaning
To find a job ID, click Details in the right pane for a job
that has finished. The ID is on the General tab.
■ Data Removal
■ MS Exchange Backup
For example, assume that you want to examine statistics for restore jobs. You can
enter the following URL:
https://100.100.100.100/spa/ws/ws_getsuccessfuljobs.php?login=root&passwd=root&workflowName=Files and Folders Restore
<workflow>Restore Workflow</workflow>
<scheduledStartTime>1151762912</scheduledStartTime>
<startDate>1151762916</startDate>
<finishDate>1151762970</finishDate>
<dataselectionID>2</dataselectionID>
<dataselectionName>reroute</dataselectionName>
<statistics />
</job>
- <job>
<jobID>4</jobID>
<agentID>1000000</agentID>
<agentName>SPA</agentName>
<locationName>my location</locationName>
<departmentName>my department</departmentName>
<executionStatusID>2</executionStatusID>
<executionStatusName>SUCCESS</executionStatusName>
<workflow>Restore Workflow</workflow>
<scheduledStartTime>1151763438</scheduledStartTime>
<startDate>1151763439</startDate>
<finishDate>1151763545</finishDate>
<dataselectionID>2</dataselectionID>
<dataselectionName>reroute</dataselectionName>
<statistics />
</job>
.
.
.
The preceding output has been truncated at the end for inclusion in this manual.
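The startDate, finishDate, and scheduledStartTime values in this output appear to be Unix epoch seconds; that interpretation is an assumption based on their magnitude, since the manual does not state the format. A sketch that extracts one job's status and duration from a fragment like the preceding:

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

# A single <job> element in the shape shown above.
job_xml = """<job>
  <jobID>4</jobID>
  <executionStatusName>SUCCESS</executionStatusName>
  <startDate>1151763439</startDate>
  <finishDate>1151763545</finishDate>
</job>"""

job = ET.fromstring(job_xml)
start = int(job.findtext("startDate"))
finish = int(job.findtext("finishDate"))

print(job.findtext("executionStatusName"))   # SUCCESS
print(finish - start, "seconds")             # 106 seconds
# Convert the epoch value to a readable UTC timestamp.
print(datetime.fromtimestamp(start, tz=timezone.utc).isoformat())
```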
If you run a report that contains information about backup jobs, the information
PureDisk returns contains the same statistics that you can obtain from clicking
Data Mining Report in the left pane after a data mining workflow was run.
PureDisk refreshes the dashboard data every 15 minutes. The timestamp is shown
at the beginning of the XML report.
The report is formatted in XML. The URL format is as follows:
https://url/spa/ws/ws_dashboard.php?login=login&passwd=pwd&filterType=type&filterID=id&action=getDashBoard
Argument Meaning
url The URL for the storage pool authority. For example:
100.100.100.100.
login The storage pool authority administrator login. For example: root.
Specify agent.
Tip: Obtain this id number before you start to type the URL for the
Web service report. If you begin to type the report URL into a browser’s
address field and then click in the PureDisk Web UI to retrieve the id,
you lose the text that you typed into the address field. Alternatively,
retrieve the id in a separate browser window.
For example, assume that you want to obtain a dashboard Web service report for
an agent. You can enter the following URL:
https://valhalla.minnesota.com/spa/ws/ws_dashboard.php?login=root&passwd=root&filterType=agent&filterID=3&action=getDashBoard
<JobID>77</JobID>
<AgentID>33000000</AgentID>
<Workflow id="13500">Maintenance</Workflow>
<Policy id="8">System policy for Maintenance</Policy>
<PolicyRunID>28</PolicyRunID>
<Scheduled>2008-03-15 06:20:01</Scheduled>
<Start>2008-03-15 06:20:03</Start>
<Stop>2008-03-15 06:20:23</Stop>
<Status id="2">SUCCESS</Status>
</Job>
</Jobs>
<JobSteps/>
<Statistics id="33000000" TimeStamp="2008-03-15 09:30:01" xml:base="/Storage/var/stats_33000000.xml">
.
.
.
The preceding output has been truncated at the end for inclusion in this manual.
Storage Edition / Agents Lists each individual feature license and lists
the PureDisk edition license that is installed
on this storage pool. Certain PureDisk
features require separate licenses.
Active License keys The number of valid license keys that are
installed on the registered storage pool. You
can install the same license key on multiple
storage pools, but this report lists each key
only once.
To view a licenses and features report for a particular license type or all types
◆ Use the Filter on feature pull-down menu to select a license type.
Your choices are as follows:
■ All
■ Premium Infrastructure
■ Windows Application & Database Pack
■ Standard Agent
Last updated on The date and time when the report data was
created.
Windows application and database pack The number of application program agents
deployed in this storage pool.
/Storage/log
For each seven-day interval, PureDisk retains up to 1000 lines of logging messages
in the active log file in /Storage/log. Log files from PureDisk services often
exceed 5 MB, but PureDisk does not retain job log files that are larger than
5 MB.
PureDisk uses the standard Linux log rotation mechanism to rotate the audit log
every seven days. Log rotation ensures that the log files do not become too large.
PureDisk moves old logging information into separate files and compresses the
files to save space. PureDisk does not remove old log files. You can examine the
old log files in /Storage/log. The old files are named
/Storage/log/audit.log.1.bz2, /Storage/log/audit.log.2.bz2, and so on.
The last 1000 lines of every log file are always accessible in the /Storage/log
directory.
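Because the rotated files are bzip2-compressed, read them with bzcat rather than cat. A sketch that lists the rotated audit logs in a directory and prints the tail of the most recent rotation (file names follow the convention above):

```shell
# Sketch: show the rotated audit logs in a directory and print the tail
# of the newest rotation. bzcat reads .bz2 files without extracting them.
show_rotated_audit_logs() {
  dir="$1"
  ls -1 "$dir"/audit.log.*.bz2 2>/dev/null
  bzcat "$dir"/audit.log.1.bz2 2>/dev/null | tail -n 100
}
# Usage on a PureDisk node:
# show_rotated_audit_logs /Storage/log
```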
The following sections describe the log files:
■ See “Content router log files” on page 262.
■ See “Metabase engine log file” on page 265.
■ See “Workflow engine log file” on page 268.
January 17 16:10:11 INFO [1076910400]: Task Manager: started task 0 [thread 1079552320]
for 192.168.163.1:1636
January 17 16:10:11 INFO [1079552320]: Remote is using libcr Version 6.5.0.6792, Protocol
Version 6.1 running on
WIN32. Agent pdbackup.exe requesting access for DataSelection ID 7
Example 2. The following shows the metabase engine (192.168.163.132 = MBE IP)
requesting a POList (MBE-CLI application) from system data selection 1:
January 17 16:10:16 INFO [1076910400]: Task Manager: started task 0 [thread 1079552320]
for
192.168.163.132:51050
January 17 16:10:16 INFO [1079552320]: Remote is using libcr Version 6.5.0.6792, Protocol
Version 6.1 running on
Linux-x86_64. Agent MBE-CLI requesting access for DataSelection ID 1
If you want an overview of all incoming single-stream backups (PutFiles), you can
search the spoold.log file, as follows:
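A command of the following form produces output like the listing below. The -B 1 option prints the line before each match, so the connecting client IP address is shown as well; the exact match pattern is an assumption based on the log lines in these examples:

```shell
# Sketch: list every incoming PutFiles (single-stream backup) request.
# -B 1 also prints the preceding line, which carries the client IP address.
putfiles_overview() {
  grep -B 1 'Agent PutFiles' "$1"
}
# On a content router node:
# putfiles_overview /Storage/log/spoold.log
```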
January 17 14:10:12 INFO [1076910400]: Task Manager: started task 0 [thread 1079552320]
for 192.168.163.1:3738
January 17 14:10:12 INFO [1079552320]: Remote is using libcr Version 6.5.0.6792, Protocol
Version 6.1 running on
WIN32. Agent PutFiles requesting access for DataSelection ID 1
--
January 17 15:10:10 INFO [1076910400]: Task Manager: started task 0 [thread 1079552320]
for 192.168.13.41:4438
January 17 15:10:10 INFO [1079552320]: Remote is using libcr Version 6.5.0.6792, Protocol
Version 6.1 running on
WIN32. Agent PutFiles requesting access for DataSelection ID 7
--
January 17 16:10:11 INFO [1076910400]: Task Manager: started task 0 [thread 1079552320]
for 192.168.163.14:1638
January 17 16:10:11 INFO [1079552320]: Remote is using libcr Version 6.5.0.6792, Protocol
Version 6.1 running on
WIN32. Agent PutFiles requesting access for DataSelection ID 4
--
January 17 17:10:13 INFO [1076910400]: Task Manager: started task 0 [thread 1079552320]
for 192.168.163.1:2201
January 17 17:10:13 INFO [1079552320]: Remote is using libcr Version 6.5.0.6792, Protocol
Version 6.1 running on
WIN32. Agent PutFiles requesting access for DataSelection ID 9
In the preceding grep(1) command, the -B 1 parameter specifies to show the line
before the match, so the connecting client IP address is also displayed.
For each transaction log, PureDisk logs the number of actions per type. For
example:
If you want to specify that the log files include more information, include the
--trace parameter when you restart the content router. For example:
If you specified the --trace parameter, later you can specify the following
to disable tracing:
If your log file is large, you can search for the information you want. For example,
type the following command to display all imports for data selection 7:
PureDisk:/Storage/log # grep 'Task \[7-' mbe.log
The metabase engine disk evaluator logs disk usage every 5 minutes. For example:
The pdwfe.log file contains information about the following common workflow
engine actions:
■ About the watchdog:
■ About agents when they request the next job step (nextJobStep web service):
You can retrieve log information related to a single job. For example, to obtain
workflow engine log information related to job ID 24, type the following command:
In this example, there are two job step processes: ProcessJobStatistics and
MBImportAction. All log lines that relate to these job steps carry the same
thread ID: 1080609088 for ProcessJobStatistics and 1077438784 for MBImportAction.
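One quick way to see which job steps logged the most is to count lines per thread ID. A sketch, assuming the bracketed numbers in pdwfe.log are thread IDs as in the examples above:

```shell
# Sketch: count log lines per bracketed thread ID, busiest thread first.
thread_counts() {
  grep -o '\[[0-9]\{6,\}\]' "$1" | sort | uniq -c | sort -rn
}
# On the node that hosts the workflow engine:
# thread_counts /Storage/log/pdwfe.log
```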
The location of this file differs depending on your platform. For example, on a
Windows client, agent.cfg is located in install_dir\etc\agent.cfg; by default,
install_dir is C:\Program Files\Symantec\NetBackup PureDisk Agent. When you edit
this file, go into the debug section, and set the debug parameter to 1.
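After the edit, the relevant portion of agent.cfg might look like the following sketch; the section and parameter names follow the step above, and surrounding entries are omitted:

```ini
[debug]
; Set to 1 to enable debug logging for the agent
debug=1
```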
■ You can log on to the PureDisk node with a Windows terminal client such as
PuTTY. Ensure that the terminal client uses the UTF-8 character set and a font
that contains the international characters that you need to display.
■ If you log on to the PureDisk node directly through the console, PureDisk does
not display international characters properly. Use one of the previous methods
to view log files with international characters.
■ Deactivating a service
For example, you might need to add nodes or services. To determine when to add
additional services, perform the following tasks on a regular basis:
■ Examine events from the system monitor script.
The system monitor script monitors system activity. By default, it runs every
five minutes and sends a status message. To see the messages, click Monitor
> Alerts & Notification. In the right pane, pull down Application, and type
MonitorStatistics in the Look for: field.
For more information, see the following:
The PureDisk Backup Operator's Guide.
■ Display capacity dashboards.
For more information, see the following:
See “About Dashboard reports” on page 249.
For example, you might need to add the following additional services:
■ Metabase engine.
One metabase engine service can support 1,000 clients. Add an additional
metabase engine service if your site needs to support more than 1,000 clients.
As you add new clients, PureDisk assigns them to the new metabase engine.
It does not move clients from one metabase engine to another metabase engine.
■ Content router.
Add an additional content router service if the /Storage/data partition fills.
The system monitor script’s report and the capacity dashboard include
information on the disks that have reached their capacity. For more information
about adding content routers and content router rerouting, see the following:
See “Rerouting a content router and managing content routers” on page 288.
■ NetBackup export engine.
This service lets you send content router data to a NetBackup storage unit.
The following sections explain how to perform reconfiguration tasks:
■ See “Adding a service to a node” on page 282.
■ See “Activating a new service in the storage pool” on page 287.
■ See “Rerouting a content router and managing content routers” on page 288.
■ See “Deactivating a service” on page 296.
■ See “Managing license keys” on page 299.
■ See “About central reporting” on page 300.
■ See “Rerouting a metabase engine” on page 304.
■ See “About clustered storage pool administration” on page 310.
Adding a service to a node
http://URL/Installer
For URL, type the FQDN of the node that hosts the storage pool authority
service.
3 Click Next on the wizard's pages until you arrive at the Services Configuration
page.
4 On the Services Configuration page, perform the following steps:
■ Click Change.
■ Select the service you want to add.
■ Click Next when the Services Configuration page is complete.
8 (Conditional) Visually inspect the Cluster Manager Java Console and make
sure that all resources now appear with a status of Online.
Perform this step if the storage pool is clustered.
9 (Conditional) In the Cluster Manager Java Console, right-click the service
group and select Unfreeze.
Perform this step if the storage pool is clustered.
10 (Conditional) Verify that the service was added successfully.
Perform this step if you added a content router or a metabase engine.
Proceed as follows:
■ If you added a content router, see the following:
See “Verifying and specifying content router capacity” on page 285.
■ If you added a metabase engine or a NetBackup export engine, see the
following:
See “Activating a new service in the storage pool” on page 287.
Adding a new node and at least one new service on the new node
The following procedure explains how to add a new node and a service.
When you add new nodes to a clustered storage pool, make sure to add them one
at a time.
To add a service to a new node
1 Install PDOS on the computer that you want to configure as a new node.
Use the instructions in the PureDisk Storage Pool Installation Guide to install
PDOS.
2 In a browser window, type the following to start the storage pool configuration
wizard:
http://URL/Installer
For URL, type the FQDN of the node that hosts the storage pool authority
service.
3 Click Next until you arrive at the Storage Pool Node Summary page.
4 Visually inspect the Storage Pool Node Summary page and determine if the
new node appears.
If the new node does not appear, click Add Node and add the node. Use the
instructions in the PureDisk Storage Pool Installation Guide to add the node.
5 Click Next until you arrive at the Storage Selection pages.
Use the instructions in the PureDisk Storage Pool Installation Guide to
configure storage for this node. If no disks appear in the wizard, it might be
because your disks need to be formatted or repartitioned.
6 Click Next until you arrive at the Services Configuration page.
Use the instructions in the PureDisk Storage Pool Installation Guide to
configure services on this node.
7 Click Next until you arrive at the Implementation page.
8 On the Implementation page, click Finish.
9 (Conditional) If TCP/IP settings on the other nodes have been changed to
improve replication job performance, run the following script on the new
node:
# /opt/pdconfigure/scripts/support/tcp_tune.sh modify
Proceed as follows:
■ If you added a content router, see the following:
See “Verifying and specifying content router capacity” on page 285.
■ If you added a metabase engine or a NetBackup export engine, see the
following:
See “Activating a new service in the storage pool” on page 287.
7 Perform step 3 through step 6 for each content router in the storage pool.
8 Your next action depends on which type of service you changed or added, as
follows:
If you edited the information for an active content router, perform the
procedure in the following section:
See “Rerouting a content router and managing content routers” on page 288.
If you edited the information for the content routers that you installed as
part of a new storage pool, perform the procedure in the following section:
See “Rerouting a content router and managing content routers” on page 288.
http://URL/Installer
For URL, type the FQDN of the node that hosts the storage pool authority
service.
3 Click Next until you arrive at the Storage Pool Node Summary page.
4 Visually inspect the Storage Pool Node Summary page and determine if the
new node appears.
If the new node does not appear, click Add Node and add the node. Use the
instructions in the PureDisk Storage Pool Installation Guide to add the node.
When the new node appears in the node summary, perform the following
steps:
■ Select the node you want to configure as a passive node.
# /opt/pdinstall/prepare_additionalNode.sh addr[,addr,...]
For addr, type the IP address of the public NIC on the new node.
# /opt/pdconfigure/scripts/support/tcp_tune.sh modify
9 Use the Cluster Manager Java Console to perform a manual failover to the
new node.
Symantec recommends that you test a manual failover to this new node at a
time that is convenient in your schedule. When you perform a manual failover,
your storage pool will be temporarily offline. See the instructions on how to
perform a manual failover in the Veritas Cluster Server (VCS) documentation.
Note: When you reroute the storage pool, PureDisk moves data between
content routers. This process requires some free storage space on each of the
content routers. If a content router has no more storage available, your
rerouting might take much longer. Determine whether to run your data
selection removal policies and data removal policies to free some storage
space before you start the rerouting process.
You have requested activation of this content router, but have not yet started
rerouting.
■ Deactivation requested
You have requested deactivation of this content router, but have not yet started
rerouting.
■ Activation pending
During rerouting, content routers that you have activated change from the
state "Activation requested" to "Activation pending" as soon as the actual
rerouting of data starts.
■ Deactivation pending
During rerouting, content routers that you have deactivated change from the
state "Deactivation requested" to "Deactivation pending" as soon as the actual
rerouting of data starts.
■ Active
This content router is active.
■ Inactive
This content router is inactive.
In this case, you can still make changes, either to activate or to deactivate the
content router, before you start the rerouting process again. However, try to avoid
such situations because they result in unnecessary data movement between
content routers.
■ Second level. Backups stop when the first content router in a storage pool
reaches this level. This level is the warning threshold.
■ Third level. Data from the spool area can continue to fill the content routers
even after the backups stop. The content routers are full.
If your content routers fill up, perform one or more of the following actions:
■ First, run a data removal policy. If you know that you have a lot of unneeded
data on the content router, this process frees up needed space. For information
on data removal policies, see the PureDisk Backup and Restore Guide.
■ Second, add another content router and reroute your data. Because you have
full content routers, this process is very slow. Use the procedures in this
chapter, and perform this action if the data removal policy did not free up
enough space.
■ Third, call Symantec technical support.
1 | 1 | Parallel
2 | 1 or 2 | Parallel
3 | 2, 3, or 4 | Parallel
4 | 1 | Serial
4 | 4 | Parallel
2 Examine the job log for network errors or other environmental factors.
Deactivating a service
The following procedure explains how to deactivate a content router or a
NetBackup export engine in a storage pool. Other services cannot be deactivated.
Before you deactivate a content router, check the capacity of the other content
routers in your storage pool. For information about how to check this capacity,
see the following section:
Preparing to deactivate a content router
Assume that you want to remove content router 3 by deactivating it and rerouting
the storage pool. You must reroute and redistribute the 600 GB of data on content
router 3 to content router 1 and content router 2. Together, content router 1 and
content router 2 have 600 GB free. The deactivation appears feasible. However,
this plan is not feasible because the rerouting would fill each content router to
100% capacity. The rerouting process requires that the host that receives the data
has a margin of excess capacity.
A content router always has an internal soft limit and an internal hard limit on
capacity. A content router requires a margin of excess capacity to function. Another
reason for maintaining a margin is that the rerouting process is not always even.
Content router 1 might receive 300 GB of data and reach its limit before content
router 2 receives 100 GB of data. The rerouting process would then fail even
though content router 2 still has excess capacity.
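The arithmetic behind this conclusion can be sketched as a quick shell check. The 10% safety margin here is an assumed illustration, not a documented PureDisk limit; the actual soft and hard limits are internal to the content router:

```shell
# Sketch: feasibility check for the example above. Content router 3 holds
# 600 GB; routers 1 and 2 together have 600 GB free. The 10% margin is an
# assumed illustrative value.
data_to_move_gb=600
free_gb=600
margin_pct=10
usable_gb=$(( free_gb * (100 - margin_pct) / 100 ))
if [ "$data_to_move_gb" -le "$usable_gb" ]; then
  echo "rerouting appears feasible"
else
  echo "not feasible: only ${usable_gb} GB usable after the margin"
fi
```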
For more information about soft limits and hard limits, see the PureDisk Backup
Operator's Guide.
Before you proceed to the next step, ensure that you properly prepared to
deactivate the content router.
See “Preparing to deactivate a content router” on page 296.
4 In the right pane, click Deactivate Content Router or Deactivate NetBackup
Export Engine.
For a content router, the status changes to Deactivation requested.
5 In the right pane, respond to the question about whether to reroute now or
whether you want to make more changes.
6 Select the storage pool.
7 In the right pane, click Reroute Content Router.
This selection starts the rerouting process, which redistributes data over all
active content routers. The process moves data from the content router in
Deactivation requested status to the content routers that you want to remain
active. Early in the rerouting process, the state of the content router changes
from Deactivation requested to Deactivation pending. At the end of the
rerouting process, PureDisk sets its state to Inactive.
Wait for rerouting to complete successfully before proceeding to the next
step.
For more information about the rerouting process, see the following:
See “Rerouting a content router and managing content routers” on page 288.
8 (Conditional) Take offline the cluster group to which the active service belongs.
Perform this step only if the following are both true:
■ The storage pool is clustered.
■ This service is the only remaining active service on the node.
From the Cluster Manager Java Console, right-click the cluster group, and
select Offline > All Systems.
# /opt/pdinstall/add_central_reporting.sh
4 Add one or more storage pools to this new central storage pool.
See “Adding a remote storage pool to a central storage pool” on page 301.
6 Click Add.
When you add a storage pool, PureDisk queries for a list of other storage
pools that are known (through replication) to that storage pool. PureDisk
adds these linked storage pools to the central storage pool list.
# /opt/pdinstall/del_central_reporting.sh
4 For each agent that you want to move, record the agent ID information from
the Agent Dashboard display.
For example, you can record the information below:
5 In the right pane, note the Agent Address field, and record the FQDN of the
new metabase engine.
Metabase engine node’s identification __________________________
6 Reroute the agents on the metabase engine.
See “Rerouting the agents on the metabase engine” on page 308.
# cd /opt/pdspa/cli
agent_id Specify the agent ID of one of the agents you want to move.
new_mbe_id Specify the node identification for the new metabase engine. This
value is the FQDN, host name, or IP address as it appears in the
administrative Web UI.
The rerouting script fails if you specify a host name and the
identifier in the Web UI is an IP address (or vice versa).
# /etc/init.d/pdagent restart
3 In the right pane, look for Metabase Engine: and make sure that the agent is
attached to the new metabase engine.
4 Test the new configuration by running a manual backup from this agent.
Troubleshooting
You might need to abort a metabase engine rerouting job or a metabase engine
rerouting job might fail. The following procedure returns a storage pool to the
state it was in before you started a metabase engine rerouting job.
To troubleshoot a failed metabase rerouting job
1 Log on to the storage pool authority as root.
2 Type the following command to change to the PureDisk commands directory:
# cd /opt/pdspa/cli
# /opt/pdag/bin/php MBEHeal.php
# passwd
When the command issues prompts, type the old and new passwords.
http://URL/Installer
For URL, type the FQDN of the node that hosts the storage pool authority
service.
2 Click Next until you arrive at the Regenerate Passwords page.
3 Click Regenerate Passwords.
4 Wait for the process to complete.
5 Click Cancel.
increase the number of client connections. Each content router requires a certain
amount of memory per client, and this calculation is as follows:
(2 x segment_size) + 512 KB
512 KB is the stack size for the client thread.
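As a quick sketch of that calculation, with an assumed segment size of 128 KB and an assumed client count (your configured values may differ):

```shell
# Per-client memory for a content router: (2 x segment_size) + 512 KB,
# where 512 KB is the client thread's stack size.
segment_size_kb=128   # assumed example value
clients=500           # assumed example client count
per_client_kb=$(( 2 * segment_size_kb + 512 ))
total_mb=$(( per_client_kb * clients / 1024 ))
echo "per client: ${per_client_kb} KB; ${clients} clients need about ${total_mb} MB"
```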
The following procedure explains how to increase the number of clients.
To increase the number of clients
1 Click Settings > Configuration.
2 In the left pane, expand Configuration File Templates > PureDisk
ContentRouter > Default ValueSet for PureDisk ContentRouter >
ContentRouter > MaxConnections > All OS:number.
3 Select All OS:number.
4 In the right pane, in the Value field, increase the present number to the
number of clients + 5.
Five slots are reserved. The maximum value you can specify is 8192.
5 Click Save.
6 In the left pane, select TaskThreadStackSize.
7 In the right pane, select Add Configuration File Value.
8 On the Properties: Configuration File Value screen, change the Value field
to 256 and click Add.
This value is the stack size for client threads.
9 Restart the content router.
For information about how to restart the content router or other processes
see the following:
■ See “Stopping and starting processes on one PureDisk node (unclustered)”
on page 314.
■ See “Stopping and starting processes on one PureDisk node (clustered)”
on page 317.
■ See “Stopping and starting processes in a multinode PureDisk storage
pool” on page 318.
operation guarantees that all nodes have the correct time setting. However, the time on
a PureDisk node can become incorrect in exceptional cases, such as when the NTP
server fails or the connection between the storage pool and the NTP server fails.
If you notice an incorrect time setting on a PureDisk node, use the following
procedure to adjust the clock in a safe way.
To adjust the clock on a PureDisk node if the time difference is less than one day
1 Stop all PureDisk services on the node.
Typically, this action causes running jobs to fail. For information about how
to stop and start processes, see the following:
■ See “Stopping and starting processes on one PureDisk node (unclustered)”
on page 314.
■ See “Stopping and starting processes on one PureDisk node (clustered)”
on page 317.
■ See “Stopping and starting processes in a multinode PureDisk storage
pool” on page 318.
2 (Conditional) Make sure that the NTP server works properly and can be
reached from the node.
Perform this step if the node you want to fix hosts the storage pool authority
service.
3 Adjust the time on the node.
4 Start the PureDisk processes on the node.
For information about how to stop and start processes, see the following:
■ See “Stopping and starting processes on one PureDisk node (unclustered)”
on page 314.
■ See “Stopping and starting processes on one PureDisk node (clustered)”
on page 317.
■ See “Stopping and starting processes in a multinode PureDisk storage
pool” on page 318.
To adjust the clock on a PureDisk node if the time difference is more than one day
◆ Contact Symantec technical support.
Adjusting the Web UI time-out interval
<session-timeout>30</session-timeout>
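This element follows the standard Java servlet web.xml convention: the value is the idle timeout in minutes, nested inside a session-config element. A sketch of the surrounding context (the exact file location depends on your installation):

```xml
<session-config>
  <!-- Web UI sessions expire after 30 minutes of inactivity -->
  <session-timeout>30</session-timeout>
</session-config>
```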
# /etc/init.d/puredisk stop
For example, assume that you had to abort the installation of a PureDisk
environment. Before trying to install the software again, you need to stop all
the services on the host. You can enter the preceding command to stop all
services correctly before you try to reinstall the environment.
# /etc/init.d/puredisk start
# /etc/init.d/puredisk restart
If you want to stop more than one service, enter a space character
between each service. Stop them in the following order:
If you want to start more than one service, enter a space character
between each service. Start them in the following order:
For example:
4 Visually inspect the display and make sure that all resources of pd_group1
now appear with a status of Online.
5 Right-click the resource group (pd_group1) and select Unfreeze.
This action unfreezes the node.
Stopping and starting processes in a multinode PureDisk storage pool
faulted in the PureDisk Web UI. If this is the case, complete the following
steps:
■ Right-click that resource and select Clear Fault - Auto.
■ After you clear the fault, the resource appears as Offline. Although the
resource restarts, you must tell VCS to monitor it again. To enable
monitoring, right-click the resource and select probe - node.
This step assumes that the node’s other services are currently online.
4 Visually inspect the display and make sure that all resources of resource_group
now appear with a status of Online.
5 Right-click a resource_group and select Unfreeze.
This action unfreezes the resource group. Perform this step for all resource
groups.
# /etc/init.d/puredisk status
Restarting the Java run-time environment
# /etc/init.d/puredisk start
Chapter 12
Reconfiguring your
PureDisk environment
This chapter includes the following topics:
Note: Do not edit a line if its default value contains brace characters, for
example, All OS:{{$agentid}}. These are system variables.
3 (Conditional) Expand the tree in the left pane and select value set copy.
Perform this step if the copy of the value set does not already appear in the
left pane.
For example, select PureDisk Client Agent > Copy of Default ValueSet for
PureDisk Client Agent.
4 In the right pane, click Assign template.
5 Select the entities that you want to use this value set.
6 Click Assign.
7 (Conditional) Click Push Configuration Files.
Perform this step if you want the changes you made to become permanent.
The list of members should include the services you selected in step 5.
PureDisk monitors the configuration files that are pushed to each agent and
checks if the value set has changed. PureDisk performs this check and creates
update jobs only if the value set has changed since the last update job ran for
each agent. For example, if you push a value set to an agent twice without
changing the value set, PureDisk creates only one job.
If you use the Force option, the server-side change checking is ignored and
an update job is always created.
8 (Conditional) Click Push.
Perform this step if you also performed step 7.
Confirm your actions in the dialog boxes that appear.
Note: If you edit these files with a text editor, you cannot push them to the storage
pool. Also, any subsequent changes that you make with the Web UI overwrite the
manual changes you made with a text editor. Symantec recommends that you use
this method only if instructed to do so by a Symantec technical support
representative.
Controller /etc/puredisk/pdctrl.cfg
■ Example 2.
You accidentally make erroneous edits to a configuration file.
You might change configuration file parameters and later want to revert to
the original configuration file. If the old configuration file exists and has a
valid agent ID, you can obtain a new copy from the storage pool authority.
■ Example 3 (Linux only).
You have SPAR enabled and need to retrieve a new configuration file.
SPAR enables you to replicate storage pool information from a remote storage
pool to a main storage pool. The remote storage pool acts as a client to the
main storage pool.
For more information about how to use SPAR, see the PureDisk Administrator’s
Guide.
In the following procedure, the format for the pdregister command is shown
generically for MS Windows or UNIX clients. The .exe suffix applies only to
Windows clients.
To retrieve a new configuration file for a client
1 Invoke the PureDisk Web UI and make sure that the client appears in the list
of clients in the left pane when you select Manage > Agent.
Do not perform this procedure if the client is not registered on the storage
pool currently.
2 Log on to the client system as root (Linux or UNIX platforms) or as the
administrator (Windows platforms).
3 Change to the directory into which you installed the agent software.
On Linux and UNIX platforms, change to install_path/pdag/bin. The default
is /opt/pdag/bin.
On Windows platforms, change to the directory into which you installed the
agent. By default, this directory is C:\Program Files\Symantec\NetBackup
PureDisk Agent\bin.
This set of parameters assumes that the original configuration file still resides
on the client. Your intent is to restore it to the form it has on the storage pool
authority. You do not need to specify the agent ID or the logon credentials.
More information about the parameters and arguments that pdregister
accepts is available.
See the PureDisk Client Installation Guide.
5 (Conditional) Activate the agent.
Perform this step if the agent is not activated.
More information about how to activate the agent is available.
See the PureDisk Client Installation Guide.
To reset the data lock password on a client
1 Invoke the PureDisk Web UI and make sure that the client appears in the list
of clients when you click Manage > Agent.
Do not perform this procedure if the client is not registered on the storage
pool currently.
2 Log on to the client system as root for UNIX clients and as admin for MS
Windows clients.
3 Change to the directory into which you installed the agent software.
On Linux and UNIX platforms, change to install_path/pdag/bin. The default
is /opt/pdag/bin.
On Windows platforms, change to the directory into which you installed the
agent. By default, this directory is C:\Program Files\Symantec\NetBackup
PureDisk Agent\bin.
Make sure that the logon and password belong to a user that has Agent
Management permissions.
Chapter 13
Tuning and optimization
This chapter includes the following topics:
■ maxstreams.
The default value is 1 on all operating systems.
■ MaxSegmentPrefetchSize. Specifies the number of bytes to prefetch
during a restore. The default is 16 MiB. Valid units are B, KiB (1024), MiB,
GiB, KB (1000), MB, and GB.
When set to zero (0), PureDisk disables prefetching. During restore,
PureDisk fetches only one data segment at a time. This behavior is identical
to the default restore behavior before PureDisk 6.6.
If you restore to a client that is very low on memory, and you want to
ensure that memory use is low during the restore job, you can set this
value to zero (0) or to a value that is less than the default.
■ SegmentChunkSize. Specifies the number of bytes of data to transfer
over the network from the server to the client at one time. The default is
32 KiB. Valid units are B, KiB (1024), MiB, KB (1000), and MB.
The range for this setting is from 1 KiB through 16 MiB.
This setting has no effect if MaxSegmentPrefetchSize is set to zero (0).
■ /opt/pdag/tmp
During very large backups, these lists can grow beyond the space that you allocated
to the / partition, which is typically kept relatively small. If you expect this space
problem might happen on a client, use the following procedure to modify the
agent configuration file on that system.
Note: Repeat this procedure each time you update the agent configuration files
through the Web UI. The repetition is necessary because updates through the
Web UI overwrite all agent configuration files.
# /opt/pdag/bin/pdagent --stop
# vi /etc/puredisk/agent.cfg
3 In the [paths] section, type new paths for the following parameters:
var
temp
The new paths must be full paths, not relative paths. They must refer to a
partition large enough for the backups.
4 Save and close the file.
5 Type the following command to start the agent service:
# /opt/pdag/bin/pdagent
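The path edit in the steps above can be sketched with sed on a sample file. The key=value layout of the [paths] section and the replacement locations shown here are assumptions for illustration; check them against your actual agent.cfg before editing it.

```shell
# Build a sample agent.cfg fragment (layout assumed, not copied from a real file).
cat > agent.cfg.sample <<'EOF'
[paths]
var=/opt/pdag/var
temp=/opt/pdag/tmp
EOF

# Repoint both entries to full paths on a larger partition.
sed -i 's|^var=.*|var=/Storage/pdag/var|; s|^temp=.*|temp=/Storage/pdag/tmp|' agent.cfg.sample
cat agent.cfg.sample
```

Remember to stop the agent service before the edit and start it again afterward, as the numbered steps describe.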
You might need to experiment with more than one approach to performance
tuning, or you might need to use different combinations of streams and segment
size values. The exact values depend on the specific client and its hardware
configuration.
Symantec recommends that you start with a small number of streams and that
you increase max_streams only if the backup performance is unacceptable. At
some point, if you increase max_streams, performance does not improve. A
max_streams value that is too large provides no benefit, can overload the client
system, and can cause backups to fail.
Note: If these two options are not set identically on all agents, the effectiveness
of the PureDisk Deduplication Option (PDDO) can be reduced. If you enable PDDO,
the MATCH_PDRO parameter is enabled by default. When enabled, the MATCH_PDRO
parameter specifies that PureDisk calculate the segment size based on the file
size, which is the same method by which PureDisk calculates the segment size
for a typical backup.
File-type segmentation
A comma-separated list of file types (identified by file name suffixes) can be set
in the UI or in the agent configuration file.
As a PureDisk agent transfers a file to a content router, it checks whether the
file’s suffix is contained in this list. If the suffix is in the list, the file is transmitted
in only one segment when possible. If the file is larger than the maximum segment
size, it is still transmitted in multiple segments; however, each segment is the
maximum segment size, except possibly the last segment.
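As a rough sketch of that suffix check (the suffix list, variable names, and file name here are hypothetical, not PureDisk's actual implementation):

```shell
# Hypothetical list of file types that should be sent as single segments.
single_segment_suffixes="jpg,mp3,zip"
file="photo.jpg"
suffix="${file##*.}"   # text after the last dot

# Commas on both sides make the membership test exact.
case ",$single_segment_suffixes," in
  *",$suffix,"*) mode="single-segment" ;;
  *)             mode="variable-size"  ;;
esac
echo "$file -> $mode"
```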
Unexpected results
Multiple backup streams or restore streams can have the following unexpected
results:
■ Multistreaming can increase the load on content routers.
■ With some fast network or hardware configurations, multistreaming can slow
the performance of other backup or restore jobs.
For example, if the PureDisk metabase is on the same node as a content router,
the metabase import step can take longer. The content router might not have
completed its processing, because the spool is filled faster than the content
router empties it. This delay of the metabase import can lead to a longer total
backup time, but during that time, the client and the wire are idle.
■ Multistreaming can stop other agents from connecting to a content router
because each job stream needs one of the limited connections (limited to 300).
■ Client backup and restore performance might be affected if all parallel streams
do central-processor-bound fingerprint calculations.
■ If you run too many parallel backup streams, you can overload the client
system. When this occurs, the system generates the following error: Unable
to resolve host name. PureDisk generates this error because each backup
stream has its own content router context. Because of the unique contexts,
each stream makes DNS address lookups for the content router address(es).
If the content router does not get a response from DNS lookups in one minute,
it reports this error.
If backups have many streams, DNS lookups can fail when the central processor
usage is high and some streams are not rescheduled fast enough.
■ If you add multistreaming after the CPU/wire is full, you see no performance
increase.
the specified bandwidth value. For example, if you use 4 streams and a
bandwidth of 200 KB/sec, PureDisk uses 800 KB/sec of bandwidth between
storage pools during replication.
■ Network errors. If you want to minimize the effect of network errors on
replication, you can use the following parameters: maxretrycount, sleeptime,
and maxsleeptime.
For example, the replication process might not be able to complete because of
network errors. In this case, you can specify the number of times that the
source storage pool tries to send the data again to the destination storage
pool, and how long it waits between attempts. Depending on network conditions,
you might specify a wait of a few seconds or a few minutes.
To tune replication performance
1 Log into the Web UI of the source storage pool.
2 Click Settings > Configuration > Configuration File Templates > PureDisk
Server Agent > Default ValueSet for PureDisk Server Agent > replication.
3 Change the values for one or more of the following configuration parameters
under replication:
■ bandwidthlimit. Specifies the bandwidth limit, between the storage pools,
for the replication activity, in KB/sec. If you set this parameter to zero (0),
bandwidth is unlimited. The default is 0 (no limit).
■ maxretrycount. Specifies the maximum number of times that the PureDisk
replication workflow can attempt to send data. Specify any positive integer.
The default is 10.
■ maxsleeptime. Specifies the maximum amount of time, in seconds, that
the replication workflow is permitted to sleep between retries. Specify
any positive integer. The default is 60.
■ maxstreams. Specifies the maximum number of PDDO replication streams
per replication job. Specify a positive integer no greater than 10; if you set
this parameter to a value greater than 10, PureDisk uses 10 streams per
replication job. The default is 4.
■ sleeptime. Specifies the number of seconds that the replication workflow
can sleep between two retries. Specify any positive integer. The default is
10.
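Two quick arithmetic checks follow from these parameters. The sketch below computes the total replication bandwidth for the earlier example (4 streams at 200 KB/sec) and simulates retry timing; the doubling backoff is an assumption for illustration, since this guide does not specify how PureDisk grows the sleep interval between sleeptime and maxsleeptime.

```shell
# Total bandwidth: the number of streams multiplied by the per-stream limit.
streams=4
bandwidthlimit=200   # KB/sec
total=$((streams * bandwidthlimit))
echo "total bandwidth between storage pools: ${total} KB/sec"

# Retry timing with the defaults listed above.
maxretrycount=10
sleeptime=10      # initial sleep between retries, seconds
maxsleeptime=60   # upper bound on the sleep, seconds
attempt=1
delay=$sleeptime
while [ "$attempt" -le "$maxretrycount" ]; do
  echo "attempt $attempt: on failure, wait ${delay}s before retrying"
  delay=$((delay * 2))                               # assumed growth policy
  [ "$delay" -gt "$maxsleeptime" ] && delay=$maxsleeptime
  attempt=$((attempt + 1))
done
```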
■ Synchronizing passwords
■ Configuring VCS
for each node, explains each node's role in the storage pool, shows the public
network, and shows the heartbeat networks.
node3 - Spare
NIC 1:
Host IP: 100.100.100.120
Host FQDN: node3.acme.com
Virtual IP:
Virtual FQDN:
NIC 2:
Attached to private network. No IPs.
NIC 3:
Attached to private network. No IPs.
SAN
The following describes how to install the VCS software on a failed cluster node:
■ See “(Conditional) Examining the NICs for the private heartbeats” on page 343.
(Conditional) Examining the NICs for the private heartbeats
# ip a
1: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 100
link/ether 00:11:43:e4:0b:2a brd ff:ff:ff:ff:ff:ff
inet 10.80.92.102/21 brd 10.80.95.255 scope global eth0 # << note the IP addr
inet6 fec0::80:211:43ff:fee4:b2a/64 scope site dynamic
valid_lft 2591999sec preferred_lft 604799sec
inet6 fe80::211:43ff:fee4:b2a/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:11:43:e4:0b:2b brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop qlen 1000
link/ether 00:04:23:b0:4f:86 brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop qlen 1000
link/ether 00:04:23:b0:4f:87 brd ff:ff:ff:ff:ff:ff
6: sit0: <NOARP> mtu 1480 qdisc noop
link/sit 0.0.0.0 brd 0.0.0.0
# ip a
1: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 100
link/ether 00:11:43:e4:0b:2a brd ff:ff:ff:ff:ff:ff
inet 10.81.92.102/21 brd 10.80.95.255 scope global eth0 # << note the IP addr
inet6 fec0::80:211:43ff:fee4:b2a/64 scope site dynamic
valid_lft 2591999sec preferred_lft 604799sec
inet6 fe80::211:43ff:fee4:b2a/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 100
In example 2, you can use eth0 for the node’s public NIC. You need to remove
the addressing from eth1 and eth2 so you can use these NICs for the private
heartbeat.
4 Record the NIC information and the MAC address information from the ip
a(8) command output.
The YaST interface identifies each NIC by its MAC address. You need to know
the MAC addresses of the two NICs that you want to use for the private
heartbeat.
Make sure you gathered this information and recorded it on the planning
spreadsheet, PureDisk_ClusterPlanning.xls.
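For the planning spreadsheet, the NIC-to-MAC mapping can be pulled out of saved ip a output with a short awk sketch. The sample output below is abbreviated from the examples earlier in this section.

```shell
# Abbreviated `ip a` output, saved to a file.
cat > ip_a.txt <<'EOF'
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 100
    link/ether 00:11:43:e4:0b:2a brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 100
    link/ether 00:11:43:e4:0b:2b brd ff:ff:ff:ff:ff:ff
EOF

# Remember each interface name, then print it with the MAC that follows.
macs=$(awk '/^[0-9]+: eth/ { nic=$2; sub(/:$/, "", nic) } /link\/ether/ { print nic, $2 }' ip_a.txt)
echo "$macs"
```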
5 Proceed with the installation as follows, depending on your situation:
■ Step 1
If you installed new NICs in the failed nodes as part of your disaster recovery:
See “(Conditional) Removing addressing from the private heartbeat NICs” on
page 346.
# yast
■ In the YaST Control Center main screen, select Network Devices >
Network Card.
■ On the Network Setup Method Screen, select Traditional Method with
ifup and select Next.
■ Step 2
■ Step 3
5 Select Finish.
6 Select Quit.
7 Proceed to one of the following, depending on your situation:
If you need to log into another node and configure the private heartbeat NICs
in that node without any addressing:
See “(Conditional) Examining the NICs for the private heartbeats” on page 343.
Synchronizing passwords
Perform the following procedures to synchronize and distribute the passwords
on all the nodes in your cluster.
# ssh-keygen -t rsa
■ Press Enter again to confirm the empty passphrase at the following
prompt:
For example:
For node, specify the host FQDN of one of the other nodes.
2 Repeat the following step and issue the scp(1) command to copy the key file
to each of the other nodes:
■ Step 1
ssh host_FQDN_of_this_node w
For example, you can issue the following command from the storage pool
authority node in the cluster:
# ssh node1.acme.com w
Are you sure you want to continue connecting (yes/no)? yes
5 Type additional ssh(1) commands from this node to each of the other nodes.
Use the ssh(1) command format from the following step, but specify the host
FQDN of another node:
■ Step 2
6 Repeat the following step for each of the other nodes in the cluster:
■ Step 5
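Taken together, the steps above amount to the plan below. This dry run only echoes the commands (the node FQDNs and the key_file destination are examples, not the actual file names), so nothing connects anywhere.

```shell
this_node="node1.acme.com"                      # example storage pool authority node
other_nodes="node2.acme.com node3.acme.com"     # example peer nodes

{
  echo "ssh-keygen -t rsa    # press Enter to accept an empty passphrase"
  for n in $other_nodes; do
    echo "scp key_file root@$n:key_file    # copy the key file to each node"
  done
  for n in $other_nodes; do
    echo "ssh $n w    # answer yes to the host-key prompt the first time"
  done
} > sync_passwords.plan
cat sync_passwords.plan
```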
Caution: Read the instructions that accompany each step in the following
procedure. You cannot install VCS in a PureDisk environment by pressing Y, y, or
Enter in response to each prompt. The result is an installation that is not
compatible with PureDisk. The procedure explains when to decline installation
of unnecessary or incompatible components.
# mkdir /cdrom
# mount /dev/cdrom /cdrom
# cd /cdrom/vcsmp3
# ./installer
Installing the Veritas Cluster Server (VCS) software
Prompt: Enter a Selection : [I,C,L,P,U,D,Q,?]
Action: Type I and press Enter.
Notes: Choice I indicates that you want to install a product.

Prompt: Select a product to install: [1-5,b,q]
Action: Type 1 and press Enter.
Notes: Choice 1 indicates that you want to install the Veritas Cluster Server.

Prompt: Enter the system names separated by spaces on which to install VCS:
Action: Type the unique host FQDNs for each node in the cluster. Use a space to
separate each FQDN.
Notes: Specify the FQDNs of the nodes you need to recover. Do not specify the
FQDNs of the nodes you do not need to recover.

Prompt: Do you want to enter another license key for node? [y,n,q,?] (n)
Action: Press Enter. (Make sure to press Enter in response to the prompt for
each node.)
Notes: The PDOS installer installed the license keys that you need for VCS. The
VCS installer checks the license keys on all nodes in the cluster in sequence
and prompts you to install more keys.

■ Press Enter.
■ Type n.
■ Type no.

Prompt: Select the optional rpms to be installed on all systems? [1-3,q,?] (1)
Action: Type 3 and press Enter.
Notes: View the RPM descriptions and select optional RPMs.

Prompt: Do you want to install the VRTSvcsmn rpm on all systems? [y,n,q] (y)
Action: Press Enter.
Notes: Press Enter to install the VRTSvcsmn package.

Prompt: Do you want to install the VRTSvcsApache rpm on all systems? [y,n,q] (y)
Action: Type n and press Enter.
Notes: Do not install the VRTSvcsApache package.

Prompt: Do you want to install the VRTSvcsdc rpm on all systems? [y,n,q] (y)
Action: Press Enter.
Notes: Press Enter to install the VRTSvcsdc documentation package.

Prompt: Do you want to install the VRTScscm rpm on all systems? [y,n,q] (y)
Action: Press Enter.
Notes: Press Enter to install the VRTScscm Cluster Manager Java console
package. Symantec does not support the Cluster Manager Web console on
PDOS platforms.

Prompt: Do you want to install the VRTScssim rpm on all systems? [y,n,q] (y)
Action: Type n and press Enter.
Notes: Do not install the Veritas Cluster Server Simulator package.

Prompt: Press [Enter] to continue:
Action: Review the information that appears and press Enter.
Notes: The installer displays the list of packages to install after the following
heading:
■ VRTSvcsApache
■ VRTScssim

Prompt: Press [Enter] to continue:
Action: Review the information that appears and press Enter.
Notes: The installer displays the output from installation checks after the
following heading:
Checking system installation requirements:

Prompt: Are you ready to configure VCS? [y,n,q] (y)
Action: Type n and press Enter.
Notes: Do not configure VCS at this time.

Prompt: Would you like to install Cluster Server on all systems simultaneously?
[y,n,q,?] (y)
Action: Press Enter.
Notes: Press Enter to install the cluster server on all nodes simultaneously.

Prompt: Cluster Server installation completed successfully.
Action: Review the displayed information and press Enter.
Notes: The installer displays progress messages.

Prompt: Press [Enter] to continue:
Action: Press Enter several times in response to the PERSISTENT_NAME messages.
Notes: You do not need to take any action to correct your configuration because
persistent naming is guaranteed through
/etc/udev/rules.d/30-net_persistent_names.rules.

Prompt: The README.1st file has more information about VCS. Read it Now?
[y,n,q] (y)
Action: Press Enter, or type n and press Enter.
Notes: Decide whether to read the README file and respond accordingly.
# cd /
# umount /cdrom
# eject
Caution: Do not reboot now. Install VCS 4.1 MP4 before you reboot.
# ./installmp
4 Respond to the script’s prompts regarding the cluster as the following table
describes:
Prompt: Enter the system names separated by spaces on which to install VERITAS
Maintenance Pack 4:
Action: Type the unique host FQDNs for each node in the cluster. Use a space to
separate each FQDN.
Notes: Specify the FQDNs of the nodes you need to recover. Do not specify the
FQDNs of the nodes you do not need to recover.

Prompt: Press [Enter] to continue:
Action: Press Enter.
Notes: The installer displays the output from communication checks after the
following heading:
Checking system communication:

■ VRTSvxvmcommon
■ VRTSvxvmplatform
■ VRTSvxfscommon
■ VRTSvxfsplatform

Prompt: Do you want to stop these processes and install patches on node?
[y,n,q] (y)
Action: Press Enter.
Notes: The installer prompts you to stop the processes on each node. Confirm
the stop for each node.

Note: The script repeats these questions for each node in the cluster. Respond
to the prompts for each node.

Prompt: Press [Enter] to continue:
Action: Press Enter.
Notes: The installer displays the output from communication checks after the
following heading:
Checking system communication:

Prompt: Would you like to upgrade VERITAS Maintenance Pack 4 rpms on all
systems simultaneously? [y,n,q,?] (y)
Action: Press Enter.
Notes: Press Enter to install the cluster server upgrades on all nodes
simultaneously.

Prompt: VERITAS Maintenance Pack 4 installation completed successfully.
Action: Press Enter.
Notes: This step completes the installation.
9 Repeat the following steps to install VCS 4.1 MP4RP3 on all failed nodes:
■ Step 7
■ Step 8
# cd /
# umount /cdrom
# eject
Configuring VCS
Use the following procedure to configure VCS. The first steps require you to gather
information about the cluster. You need to specify unique information for this
cluster.
To configure VCS
1 Refer to this storage pool's cluster planning spreadsheet to confirm both the
unique name for this cluster and the cluster ID number.
Note: The cluster ID number must be unique on your public network. Conflicts
with existing cluster IDs can generate unpredictable results. The cluster ID
number you specify in this procedure must be the same as the cluster ID that
the storage pool already uses.
2 Log into the node that you want to configure as the storage pool authority
node.
3 Change to the directory where the installation software resides.
For example:
# cd /opt/VRTS/install
# ./installvcs -configure
Prompt: Enter the system names separated by spaces on which to configure VCS:
Action: Type the unique host FQDNs for each node in the cluster. Use a space to
separate each FQDN.
Notes: Specify a space-separated list of FQDNs. Type the node FQDNs as you
specified them in the /etc/hosts file. For example: node1.acme.com
node2.acme.com node3.acme.com

Prompt: Press [Enter] to continue:
Action: Press Enter.
Notes: The installer displays the output from communication checks after the
following heading:
Checking system communication:

Prompt: VCS licensing verified successfully.
Action: Press Enter.
Notes: Do not enter additional license keys.

Prompt: Do you want to stop VCS processes? [y,n,q] (y)
Action: Press Enter.
Notes: Stop the VCS processes.

Prompt: VCS processes are stopped
Action: Press Enter.
Notes: The script stops all VCS processes.

Prompt: Enter the unique cluster name:
Action: Type a unique name for this cluster.
Notes: This name must be unique on your network. You cannot include spaces
or numbers in this name.

Prompt: Enter the unique Cluster ID number between 0-65535: [b,?]
Action: Type a unique ID number for this cluster.
Notes: This number must be unique on your network.
Caution: Before you proceed to the next step, make sure to record the unique
cluster name and the unique cluster ID for this cluster.
For example:
Enter the NIC for the first private heartbeat link on node1: [b,?] eth0
eth0 is probably active as a public NIC on node1
Are you sure you want to use eth0 for the first private heartbeat link? [y,n,q,b,?] (n)
y
Would you like to configure a second private heartbeat link? [y,n,q,b,?] (y)
Enter the NIC for the second private heartbeat link on node1: [b,?] eth2
eth2 is probably active as a public NIC on node1
Are you sure you want to use eth2 for the second private heartbeat link? [y,n,q,b,?]
(n) y
Would you like to configure a third private heartbeat link? [y,n,q,b,?] (n)
8 Press Enter to confirm that you do not want to configure an additional low
priority heartbeat link.
For example:
Do you want to configure an additional low priority heartbeat link? [y,n,q,b,?] (n)
9 In response to the prompt regarding the NICs on the other nodes, type y or
n and press Enter.
When you type y, you affirm that each node contains NICs in the same order.
When you type n, you request that the system reissue the same prompts for
each node in the cluster. This prompt sequence differs for each installation
depending on your configuration.
Are you using the same NICs for private heartbeat links on all systems? [y,n,q,b,?] (y)
y
Prompt: Do you want to configure Cluster Manager (Web Console)? [y,n,q] (y)
Action: Type n and press Enter.
Notes: Do not configure Cluster Manager. Symantec does not support the
Cluster Manager on PDOS platforms.

Prompt: Do you want to configure SMTP notification? [y,n,q] (y)
Action: Type n and press Enter.
Notes: Do not configure SMTP.

Prompt: Do you want to configure SNMP notification? [y,n,q] (y)
Action: Type n and press Enter.
Notes: Do not configure SNMP.

Prompt: Press [Enter] to continue:
Action: Press Enter several times in response to the PERSISTENT_NAME messages.
Notes: Press Enter to disregard all PERSISTENT_NAME messages for each node
and NIC.

Prompt: The README.1st file has more information about VCS. Read it Now?
[y,n,q] (y)
Action: Press Enter, or type n and press Enter.
Notes: Decide whether to read the README file and respond accordingly.
12 Type the following command to check the status of the open links:
# lltstat -n
The output returns a table that shows the link statuses. The output includes
a line for each node and a link for each private heartbeat on each node. The
LINKS field shows the number of private heartbeats per node. Each one should
be in the OPEN state.
13 Repeat the following step on each node in the clustered storage pool:
■ Step 12
You need to create storage partitions on only the active nodes. Recreate the storage
partitions just as they were before the disaster. In other words, create a /Storage
partition, and, if necessary, create a /Storage/data partition and a
/Storage/databases partition, too. You do not need to create storage partitions
on a passive node.
Starting YaST
The following procedure explains how to start YaST.
To invoke YaST
1 Log into one of the active nodes.
The active nodes are those upon which you want to install PureDisk services.
2 Type the following command to launch the SUSE Linux YaST configuration
tool:
# yast
You can type yast or YaST to invoke the interface. Do not type other
combinations of uppercase and lowercase letters.
3 In the YaST Control Center main screen, select System > Partitioner.
4 Select Yes on the warning pop-up.
5 On the Expert Partitioner screen, select VxVM.
6 (Conditional) Select Add Group.
You have the option to select Add Group only when at least one group is
configured currently.
7 On the Create a Disk Group pop-up, type a unique name for the disk group
on this node.
8 Select OK.
9 On the Veritas Volume Manager: Disks Setup screen, highlight a disk that
you want to include in the disk group.
10 Highlight Add Disk and press Return.
You can only add disks that are not yet partitioned. If you try to add a disk
with partitions, adding the disk to the disk group does not succeed. Delete all
partitions from the disk before you try to add it to the disk group.
To delete all partitions on a disk, select Expert in the YaST interface and
select Delete Partition Table and Disk Label.
11 On the Add a name for the disk pop-up, type a name for the disk.
12 Select OK.
13 Repeat the following steps for all the disks that you want to include in the
disk group:
■ Step 9
■ Step 12
14 Select Next.
15 Proceed to the following:
See “Creating the storage partitions” on page 367.
3 Select OK.
4 Select Next.
5 (Conditional) Create a /Storage/data partition.
Perform the following steps to create a /Storage/data partition if this node
had a /Storage/data partition before the disaster:
■ On the Veritas Volume Manager: Volumes screen, select Add.
The Create Volume pop-up appears.
■ On the Create Volume pop-up, in the Volume Name field, specify
Storage_data.
■ Select OK.
■ Select Next.
■ Select OK.
■ Select Next.
7 Select Apply.
8 On the Changes pop-up that appears, select Apply.
9 Select Quit.
10 Select Quit (again).
11 (Optional) Reboot the node.
Perform this step if you want to test the configuration for reboot persistence.
12 Type the following command to view the disk summary for this node:
# vxdisk -o alldgs list
Appendix B
Command Line Interface options for PureDisk
General MAN page for PureDisk CLI
DESCRIPTION
NetBackup PureDisk offers customers a software-based data deduplication solution
that integrates with NetBackup. It provides customers with the critical features
required to protect all their data – from remote office to virtual environment to
datacenter. It reduces the size of backups with a deduplication engine that can be
deployed for storage reduction. Its integration with NetBackup provides
bandwidth reduction through PureDisk clients. An open architecture allows customers to easily
deploy and scale NetBackup PureDisk using standard storage and servers.
NOTES
■ The command line interface commands are found only on the storage pool
authority in the /opt/pdcli/calls directory.
■ All man pages that are associated with the commands are located in the
/opt/pdcli/man directory.
■ The command line interface commands can be used to script activities. Be sure
the first command that is entered in the script is the pdlogonuser command.
If you do not run pdlogonuser, you are prompted for a user name and password
before each command is executed.
■ The contents of all man pages are collected in a PDF format for offline viewing.
See the PureDisk Command Line Interface Guide.
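A scripted CLI session based on the notes above might look like the dry run below. The commands are only echoed here, and the choice of calls after pdlogonuser is illustrative; see the PureDisk Command Line Interface Guide for each command's options.

```shell
PDCLI=/opt/pdcli/calls   # CLI commands live only on the storage pool authority

# pdlogonuser must be the first command in the script; otherwise every
# later command prompts for a user name and password.
for cmd in pdlogonuser pdlistagent pdlistpolicy; do
  echo "would run: $PDCLI/$cmd"
done > pdcli_session.plan
cat pdcli_session.plan
```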
■ pdexport2nbu - Exports a data selection to a NetBackup files list for use with
a NetBackup policy.
■ pdfindfiles - Finds files that have been backed up.
■ pdcreategroup - Creates a new group that is used to organize users with the
same permissions.
■ pdcreatelocation - Creates a new logical grouping for one or more
departments.
■ pdcreatembgarbagecollectionpolicy - Creates a new metabase garbage
collection policy.
■ pdcreatepolicyescalation - Creates a new policy escalation.
■ pddeletegroup - Deletes a user group from the storage pool authority (SPA).
■ pddeletejob - Deletes a job. If the job is running, pddeletejob raises an error
and tries to kill the job but does not delete it. If the job is not running, it
deletes the job.
■ pddeletelicense - Deletes a license key.
Get functions
■ pdgetagent - Provides additional information about the agent object specified.
■ pdgetjobsteps - Lists the steps that are associated with the specified job.
■ pdgetlicense - Collects information about the specified license key.
List functions
■ pdlistagent - Displays all agents that are associated with a particular PureDisk
environment.
■ pdlistdepartment - Displays all departments that are associated with a
particular PureDisk environment.
■ pdlistds - Displays all data selections that are associated with a particular
PureDisk environment.
■ pdlistdstemplate - Displays all the data selection templates.
■ pdlistevent - Displays all events that are associated with a particular PureDisk
environment.
■ pdlisteventescalation - Displays all the event escalations.
■ pdlistlocation - Displays all the locations that are associated with a particular
PureDisk environment.
■ pdlistpolicy - Displays all the policies that are associated with a particular
PureDisk environment.
■ pdlistpolicyescalation - Displays the policy escalations that are attached
to a policy.
■ pdlistpolicyescalationaction - Displays all the actions that are attached
to a policy.
■ pdlistuser - Displays all the users that are associated with a particular
PureDisk environment.
Set functions
■ pdsetagent - Changes and updates the details that are associated with an
existing agent.
■ pdsetbackuppolicy - Changes the parameters of an existing backup policy.
■ pdsetdepartment - Changes and updates the details that are associated with
an existing department.
■ pdsetds - Changes and updates the details that are associated with an existing
data selection.
■ pdsetlocation - Changes and updates the details that are associated with an
existing location.
■ pdsetmaintenancepolicy - Changes the parameters of an existing maintenance
policy.
■ pdsetmbgarbagecollectionpolicy - Changes the parameters of an existing
metabase garbage collection policy.
■ pdsetperm - Sets the permissions for a user.
NetApp is a registered trademark of Network Appliance, Inc. in the U.S. and other
countries.
Novell and SUSE are registered trademarks of Novell, Inc., in the United States
and other countries.
OpenLDAP is a registered trademark of the OpenLDAP Foundation.
Red Hat and Enterprise Linux are registered trademarks of Red Hat, Inc., in the
United States and other countries.
UNIX is a registered trademark of The Open Group.
VMware, vSphere, and the VMware "boxes" logo and design are trademarks or
registered trademarks of VMware, Inc., in the United States and other countries.
Third-party software may be recommended, distributed, embedded, or bundled
with this Symantec product. Such third-party software is licensed separately by
its copyright holder. All third-party copyrights associated with this product are
listed in the Third Party Legal Notices document, which is accessible from the
PureDisk Web UI.
Index

A
Accessing a managed storage pool 303
ACL errors restore job statistic 224
ACL restore job statistic 224
activating a new PureDisk component 287
Activating a server agent 92
adding PureDisk components 280
authentication key 347, 349
Average restore rate restore job statistic 224

B
Backup job statistics 219
Backup speed backup job statistic 219
Backup time duration backup job statistic 219
Backup time duration PDDO backup job statistic 228
Bytes deleted in source data selection replication job statistic 227
Bytes deleted on source backup job statistic 219
Bytes modified in source data selection replication job statistic 227
Bytes modified on source backup job statistic 219
Bytes modified on target restore job statistic 224
Bytes new in source data selection replication job statistic 227
Bytes new on source backup job statistic 219
Bytes new on target restore job statistic 224
Bytes not modified on source backup job statistic 219
Bytes received by agent restore job statistic 224
Bytes replicated replication job statistic 227
Bytes scanned during backup PDDO backup job statistic 228
Bytes selected on source backup job statistic 219
Bytes total restore job statistic 224
Bytes transferred backup job statistic 219
Bytes transferred replication job statistic 227
Bytes transferred to content router PDDO backup job statistic 228
Bytes unmodified on target restore job statistic 224
Bytes with errors restore job statistic 224
Bytes with replication errors replication job statistic 227

C
central storage pool
  disabling 302
  enabling 301
  managing 303
  testing connections 304
clustering
  administering a storage pool 310
  also see disaster recovery - clustered storage pools 157
  configuration example 341
  examining NICs for addressing 343
  installing VCS 4.1 MP3 software 351
  installing VCS 4.1 MP4 software 356
  removing addressing from NICs 346
  synchronizing passwords 347
  VCS software requirements 341
configuration files
  editing agent configuration files for large backups 333
  editing ASCII files 326
  editing through Web UI 322
  pushing changes 325
  updating agent configuration files 327
configuring
  see reconfiguring PureDisk 321
configuring SPAR 201–202
copying a replication policy 68

D
dashboard reports 249
data lock password
  export to NetBackup 84
  files tab 215
data mining reports 233
data replication
  copying replicated data 70
  managing replicated data selections 68
  policies
    copying and deleting 68
    creating 61