
EMC VNX Series
Release 7.1
Using SRDF/A with VNX
P/N 300-013-438 Rev 01


EMC Corporation
Corporate Headquarters:
Hopkinton, MA 01748-9103
1-508-435-1000
www.EMC.com
Copyright 1998 - 2012 EMC Corporation. All rights reserved.
Published July 2012
EMC believes the information in this publication is accurate as of its publication date. The
information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION
MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO
THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED
WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an
applicable software license.
For the most up-to-date regulatory document for your product line, go to the Technical
Documentation and Advisories section on EMC Powerlink.
For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on
EMC.com.
All other trademarks used herein are the property of their respective owners.
2 Using SRDF/A with VNX 7.1
Contents
Preface.....................................................................................................7
Chapter 1: Introduction.........................................................................11
System requirements..................................................................................12
Restrictions..................................................................................................12
User interface choices.................................................................................14
Related information.....................................................................................15
Chapter 2: Concepts.............................................................................17
SRDF...........................................................................................................19
Communication between Control Stations.........................................19
SRDF and logical volumes.................................................................20
SRDF/A.......................................................................................................22
SRDF/A delta sets.......................................................................................23
Important resource considerations.....................................................24
Differences between SRDF/A and SRDF/S................................................25
Comparison of VNX for file high-availability and replication products.........26
Planning considerations..............................................................................28
Symmetrix system configuration........................................................29
Adding Symmetrix storage devices to the existing system
configuration..................................................................................29
VNX volume and Data Mover decisions (flexibility)............................30
VNX Data Mover configuration checklist............................................31
Consideration when using applications that require transactional
consistency...................................................................................32
Consideration when using applications that can switch to the
NFS copy from the R2 without a restart........................................33
Upgrading the VNX SRDF environment.............................................34
SRDF/A task overview................................................................................34
Overview of the sample configuration.........................................................39
Chapter 3: Configuring.........................................................................41
Preinitialize the configuration......................................................................42
Preinitialize from the source (first) VNX..............................................43
Preinitialize from the destination (second) VNX.................................43
Verify the preinitialization....................................................................44
Initialize the configuration (active/passive)..................................................45
Initialize the source VNX....................................................................45
Initialize the destination VNX..............................................................49
Verify SRDF/A on the source VNX (active/passive)...........................53
Activate a failover (active/passive)...............................................................56
Prepare for a graceful failover............................................................57
Activate a failover from the destination VNX.......................................59
Verify SRDF/A after failover activation................................................63
Ensure access after failover...............................................................69
Restore the source VNX..............................................................................69
Prepare for the restore.......................................................................70
Restore from the destination..............................................................70
Chapter 4: Troubleshooting..................................................................77
EMC E-Lab Interoperability Navigator.........................................................78
Known problems and limitations..................................................................78
Retrieve information from log files......................................................78
Resolve initialization failures..............................................................79
Resolve activation failures..................................................................84
Resolve restore failures......................................................................87
Resolve Data Mover failure after failover activation............................95
Handle additional error situations.......................................................98
Error messages...........................................................................................98
EMC Training and Professional Services....................................................99
Appendix A: Portfolio of High-Availability Options...........................101
Glossary................................................................................................107
Index.....................................................................................................111
Preface
As part of an effort to improve and enhance the performance and capabilities of its product
lines, EMC periodically releases revisions of its hardware and software. Therefore, some
functions described in this document may not be supported by all versions of the software
or hardware currently in use. For the most up-to-date information on product features, refer
to your product release notes.
If a product does not function properly or does not function as described in this document,
please contact your EMC representative.
Special notice conventions
EMC uses the following conventions for special notices:
Note: Emphasizes content that is of exceptional importance or interest but does not relate to
personal injury or business/data loss.
NOTICE: Identifies content that warns of potential business or data loss.
CAUTION: Indicates a hazardous situation which, if not avoided, could result in minor or
moderate injury.
WARNING: Indicates a hazardous situation which, if not avoided, could result in death or serious injury.
DANGER: Indicates a hazardous situation which, if not avoided, will result in death or serious
injury.
Where to get help
EMC support, product, and licensing information can be obtained as follows:
Product information: For documentation, release notes, software updates, or for
information about EMC products, licensing, and service, go to the EMC Online Support
website (registration required) at http://Support.EMC.com.
Troubleshooting: Go to the EMC Online Support website. After logging in, locate
the applicable Support by Product page.
Technical support: For technical support and service requests, go to EMC Customer
Service on the EMC Online Support website. After logging in, locate the applicable
Support by Product page, and choose either Live Chat or Create a service request. To
open a service request through EMC Online Support, you must have a valid support
agreement. Contact your EMC sales representative for details about obtaining a valid
support agreement or with questions about your account.
Note: Do not request a specific support representative unless one has already been assigned to
your particular system problem.
Your comments
Your suggestions will help us continue to improve the accuracy, organization, and overall
quality of the user publications.
Please send your opinion of this document to:
techpubcomments@EMC.com
Chapter 1: Introduction
EMC Symmetrix Remote Data Facility/Asynchronous (SRDF/A) is an
extended-distance asynchronous replication facility. SRDF/A provides
dependent write consistency for host writes from a source VNX/Symmetrix
DMX system pair to a destination VNX/Symmetrix DMX system pair through
predetermined time cycles (delta sets) for maintaining a restartable,
point-in-time remote copy of data. Chapter 2 provides more details.
This document is part of the VNX documentation set and is intended for
system administrators responsible for installing and managing
high-availability storage configurations. Your local EMC Customer Support
Representative performs the Symmetrix storage system administrative
tasks, such as installation and configuration of the Symmetrix DMX and
SRDF volumes required to support SRDF/A, and is available to help ensure
proper SRDF/A functionality as needed.
Topics include:
x System requirements on page 12
x Restrictions on page 12
x User interface choices on page 14
x Related information on page 15
System requirements
Table 1 on page 12 describes the EMC VNX series software, hardware, network, and
storage configurations.
Table 1. System requirements
Software:
x EMC Symmetrix Enginuity version 5670 or later microcode on the source and destination Symmetrix
DMX systems with SRDF/A.
x VNX version 7.1 on the source and destination servers.
Hardware:
x Remote adapter (RA) for interconnecting the Symmetrix DMX systems. The connection can be over
wide area channels using Fibre Channel (FC), Dense Wavelength Division Multiplexing (DWDM), or
IP.
x Production and standby Data Movers on the VNX systems to support the EMC SRDF/A active/passive
configuration.
x Similar source and destination VNX models.
Network:
IP data network for communication between Control Stations of source and destination VNX for File
systems.
Storage:
Two attached VNX/Symmetrix DMX system pairs.
Restrictions
x SRDF/A requires a Symmetrix SRDF/A license.
x SRDF/A does not work with Symmetrix 3xxx, 5xxx, or 8xxx versions. Only Symmetrix
DMX systems (5670 or later microcode) are supported.
Note: The Solutions Enabler Symmetrix SRDF family documentation, which includes the EMC
Symmetrix Remote Data Facility (SRDF) Product Guide, provides additional restrictions that apply
to the SRDF/A configuration with Symmetrix DMX. This documentation is available at
http://Support.EMC.com, the EMC Online Support website.
x The Symmetrix Enginuity version 5670 microcode supports only one SRDF/A group,
which can be dedicated to either an open host or a VNX, but not a mix of both. The 5671
release eases this restriction.
x SRDF/A source and destination sites support only one VNX/Symmetrix system pair.
x SRDF/A does not support partial failovers. When a failover occurs, all file systems
associated with SRDF-protected Data Movers fail over. To avoid failover issues, it is
critical that SRDF-protected Data Movers mount only the file systems consisting of SRDF
volumes mirrored at the destination. Local standard (STD) and business continuance
(BCV) volumes should have dedicated, locally protected Data Movers and SRDF volumes
should have dedicated, SRDF-protected Data Movers. Chapter 4 provides information
on potential failover and restore issues.
x IP Alias cannot be set up or used on the Control Stations when SRDF is configured for
disaster recovery purposes.
Management restrictions
x SRDF/A cannot be managed with the EMC Unisphere software. SRDF/A can be
managed only with the CLI by using the /nas/sbin/nas_rdf/nas_cel commands.
x For sites with redundant Control Stations, all SRDF management commands, including
/nas/sbin/nas_rdf/nas_cel commands that perform initialization, activation, and restore
operations, must be run from CS0. Control Station 1 (CS1) must be powered off at
both sites before any activation or restore commands are run.
x Solutions Enabler Symmetrix SYMCLI action commands are not invoked on a VNX
device group by using the Control Station host component, laptop service processor,
or EMC Ionix ControlCenter. However, you can run informational SYMCLI
commands for the SRDF/A device group. The Solutions Enabler Symmetrix SRDF
family documentation on the EMC Online Support website provides more information.
x When you run the nas_rdf restore command, ensure that no command is accessing
the /nas/rdf/500 directory. NAS commands in the rdfadmin environment may access the
/nas/rdf/500 directory. If a command accesses /nas/rdf/500 during the restore, the restore
command fails.
x To use fs_timefinder in the activated state by using the rdfadmin account, a straight
remote standby Data Mover configuration (such as slot 2 to slot 2, slot 3 to slot 3) is
required.
x When you create file systems or extend them, ensure that all remote replicas are of
the same type, either FC disks or ATA disks.
SnapSure checkpoints restrictions
x EMC SnapSure SavVol cannot be created on local storage if the production file
system (PFS) is mounted on a Data Mover configured with a remote standby. If you
plan to create checkpoints of a PFS that resides on an SRDF LUN, ensure that the
entire SnapSure SavVol with the checkpoints resides in the same pool of SRDF LUNs
used to create the PFS. If any part of the SavVol is stored on a local volume rather
than completely on the pool of SRDF LUNs, the checkpoints are not failed over, and
are recoverable in the event of a failover. Evaluate the use of checkpoints carefully.
x After an SRDF/A failover is activated, checkpoint scheduling is not supported until a
restore is performed.
x If SnapSure checkpoints are used in the rdfadmin environment, the SavVol volume
can only be extended manually using the symm_std_rdf_tgt storage pool. If SavVol
fills to capacity, writes to the PFS continue, while the oldest checkpoint is deactivated.
x The NDMP automatic checkpoint create and delete feature will not work in the activated
state (rdfadmin account environment). Manual checkpoint create and delete will work.
VNX feature-specific restrictions
x Automatic File System Extension will not work in the activated state (rdfadmin account
environment). Manual extension will work.
x SRDF/A with Timefinder/FS NearCopy or FarCopy is currently available with the
request for price quotation (RPQ) process. Contact the local EMC Representative or
the local EMC Service Provider for more information.
x EMC VNX Replicator works with disaster recovery replication products such as
SRDF/Synchronous (SRDF/S) and SRDF/Asynchronous (SRDF/A) or EMC
MirrorView/Synchronous (MirrorView/S). You can run SRDF or MirrorView/S products
and VNX Replicator on the same data. However, if there is an SRDF or MirrorView/S
site failover, you cannot manage Replicator sessions on the SRDF or MirrorView/S
failover site. Existing Replicator sessions will continue to run on the failed over Data
Mover and data will still be replicated. On the primary site, you can continue to manage
your SRDF or MirrorView/S replication sessions after the restore.
x Mixed backend configuration with SRDF and EMC MirrorView/S is not supported. In
a mixed backend configuration, Symmetrix is the boot storage device, which prohibits
use of MirrorView/S. MirrorView/S requires VNX for block to be the boot storage
device.
x SRDF/Automated Replication (SRDF/AR) does not support SRDF/A devices.
x R1 EFD, R2 EFD and BCV EFD devices are supported through FAST Storage Group.
These devices must be added to the FAST Storage Group to be used on VNX for
File systems.
x SRDF provides limited support for VNX FileMover. Using VNX FileMover provides
more information.
x SRDF provides limited support for MPFS. Contact the local EMC sales organization
for information about using MPFS with SRDF.
EMC E-Lab Interoperability Navigator on page 78 provides information about product
interoperability. System requirements on page 12 identifies the basic hardware and
software requirements.
User interface choices
This document describes how to configure SRDF/A by using the command line interface
(CLI). You cannot use other VNX management applications to configure SRDF/A.
Related information
Information about VNX disaster recovery that is related to SRDF/A, but beyond the scope
of this document, is included in:
x VNX Command Line Interface Reference for File
x Celerra Network Server Error Messages Guide
x Online VNX for File man pages
x Problem Resolution Roadmap for VNX
x Using VNX FileMover
x Using MirrorView/Synchronous with VNX for File for Disaster Recovery
x Using TimeFinder/FS, NearCopy, and FarCopy on VNX
x Using SRDF/S with VNX for Disaster Recovery
Other related EMC publications include:
x Solutions Enabler Symmetrix SRDF Family CLI Product Guide
x Symmetrix Remote Data Facility (SRDF) Product Guide
x EMC Business Continuity for Oracle Database 11g Enabled by EMC Celerra using
DNFS and NFS Proven Solution Guide
EMC VNX documentation on the EMC Online Support website
The complete set of EMC VNX series customer publications is available on the EMC
Online Support website. To search for technical documentation, go to
http://Support.EMC.com. After logging in to the website, click the VNX Support by Product
page to locate information for the specific feature required.
VNX wizards
Unisphere software provides wizards for performing setup and configuration tasks. The
Unisphere online help provides more details on the wizards.
Chapter 2: Concepts
SRDF/A is an extended-distance asynchronous replication facility.
SRDF/A provides an economical and high-performance replication
configuration for business continuity, enabling greater service levels than
obtained with traditional asynchronous architectures. It is the optimal
extended-distance replication option when service-level requirements
dictate that economics and application performance are more critical than
zero data exposure.
Benefits of SRDF/A include:
x Extended-distance data replication that supports longer distances than
Symmetrix Remote Data Facility/Synchronous (SRDF/S).
x Potential performance increase over SRDF/S because source host
activity is decoupled from remote copy activity.
x Efficient link utilization that results in lower link-bandwidth requirements
because less data is transferred and the data is handled fewer times.
By exploiting the locality of reference of writes within an application, data
that is updated multiple times in the same cycle is sent across the
SRDF/A links only once, and does not have to be duplicated within the
global memory of the sending control unit to preserve data consistency.
The benefit of this approach might vary among applications.
x Minimal data loss potential because the destination side lags behind
the source by minutes. Data could be out of date by twice the delta set
time period or longer, depending on the write activity and how fast the
data can be transmitted across the link.
x Failover and failback capability between the source and destination
sites. Destination volumes can be used if the source is unavailable.
x Facilities to invoke failover and restore operations manually.
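The efficient-link-utilization benefit above can be sketched as a map keyed by track number: a track rewritten within the same capture cycle replaces its pending image instead of queuing a second transfer. This is a conceptual illustration only, not EMC code; the names and data are hypothetical:

```python
# Conceptual sketch of locality of reference in an SRDF/A capture cycle.
# A capture cycle is modeled as a dict mapping track number -> latest
# data captured this cycle; rewriting a track overwrites the pending copy.

capture_cycle = {}  # track number -> latest data captured this cycle

def host_write(track: int, data: bytes) -> None:
    """Record a host write; a later write to the same track replaces
    the earlier pending copy within the active capture cycle."""
    capture_cycle[track] = data

host_write(17, b"v1")
host_write(17, b"v2")   # same track updated twice in one cycle
host_write(42, b"v1")

# Only two track images cross the SRDF/A link at the cycle switch,
# and track 17 carries its latest contents.
print(len(capture_cycle))   # 2
print(capture_cycle[17])    # b'v2'
```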
SRDF/A is based on the Symmetrix Remote Data Facility (SRDF)
technology for the EMC Symmetrix environment and is a part of the SRDF
replication software family that offers high-availability configurations.
SRDF/A recovery capability completely relies on the SRDF functionality.
Topics include:
x SRDF on page 19
x SRDF/A on page 22
x SRDF/A delta sets on page 23
x Differences between SRDF/A and SRDF/S on page 25
x Comparison of VNX for file high-availability and replication products on page 26
x Planning considerations on page 28
x SRDF/A task overview on page 34
x Overview of the sample configuration on page 39
SRDF
SRDF is a replication technology that allows two or more Symmetrix systems to maintain a
mirror of data at multiple, remote locations.
SRDF supports two types of configurations:
x active/active: Bidirectional configuration with two production sites, each acting as the
standby for the other. Each VNX has production and standby Data Movers. If one site
fails, the other site takes over and serves the clients of both sites.
x active/passive: Unidirectional setup where one VNX for file, with its attached storage
system, serves as the source (production) file server and another VNX for file, with its
attached storage, serves as the destination (backup). This configuration provides failover
capabilities in the event that the source site is unavailable.
Communication between Control Stations
In an SRDF configuration, the Control Stations associated with the source and destination
VNX for file can communicate by using the IP data network.
Figure 1 on page 20 shows a basic VNX-with-SRDF configuration that illustrates the means
of communication.
[Figure: the source VNX host and Symmetrix (R1) at Site A communicate with the target
VNX host and Symmetrix (R2) at Site B; the two Control Stations are connected through
an IP data network connection.]
Figure 1. Basic VNX-with-SRDF configuration
SRDF and logical volumes
To implement the VNX with an SRDF configuration, the standard VNX for file configuration
is modified.
Each logical volume defined in the VNX volume database consists of two physical volumes:
one on the source Symmetrix and one on the destination Symmetrix system. For example,
a typical Symmetrix volume, logical volume 001, consists of 001-R1 (the primary, or source,
disk) and 001-R2 (the destination, or remote, disk).
The R2 devices mirror the data from the source R1 volume by using SRDF. The R2 devices
are transparent to the source VNX host and become write-accessible only when they are
activated through a manual failover operation. However, if a source (R1) volume fails during
normal operation, SRDF automatically uses the R2 volume instead of R1 and continues
normal operation.
Figure 2 on page 21 provides a simplified view of a sample logical volume configuration.
Note that Symmetrix volume IDs do not have to be identical (mapping 001 to 001 as shown
in this view).
[Figure: logical volume 001 consists of 001-R1 on the Site A (source) Symmetrix and
001-R2 on the Site B (target) Symmetrix, connected through a dedicated SRDF link.]
Figure 2. VNX with SRDF logical volume
Note: In the VNX for file system setup phase, your local EMC Symmetrix system configuration
Representative configures the volumes on the attached Symmetrix system (source) and the remote
Symmetrix system (destination). In addition, SRDF software and EMC communications hardware are
installed to enable the Symmetrix systems to communicate with each other. Local EMC personnel can
ensure that the volumes are established on the dedicated SRDF link.
VNX support for SRDF includes:
x Complete disaster recovery without data loss: The VNX for file supports SRDF/S, which
represents SRDF in synchronous mode. SRDF/S is a limited-distance replication facility
established between a pair of VNX/Symmetrix systems at a source site and a destination
site. SRDF/S provides a synchronized, real-time remote mirror of data at more than one
location.
To support a wide variety of business continuance requirements while maintaining a full
disaster recovery environment employing Symmetrix storage, you can also use SRDF/S
with the TimeFinder/FS NearCopy feature. TimeFinder/FS NearCopy enables you to
create and manage snapshots of a VNX for file file system onto dedicated Symmetrix
BCVs. To create the NearCopy snapshot, the source volumes can be destination (R2)
volumes, operating in SRDF/S mode. Using SRDF/S with VNX for Disaster Recovery
and Using TimeFinder/FS, NearCopy, and FarCopy with VNX provide more information.
x Extended-distance replication: The VNX supports SRDF/A, which represents SRDF
in asynchronous mode. SRDF/A is an extended-distance replication facility established
between a pair of VNX for file/Symmetrix DMX systems at a source site and a destination
site. SRDF/A provides a restartable, point-in-time remote mirror of data lagging not far
behind the source.
VNX also supports the use of TimeFinder/FS FarCopy with SRDF in adaptive copy
write-pending mode, where you create the FarCopy snapshot by using R2 volumes as
the source volume.
Contact your EMC sales organization for more information about these and other available
configuration options.
SRDF/A
An SRDF/A configuration connects a local VNX/Symmetrix DMX system pair with a remote
VNX/Symmetrix DMX system pair. This connection can be made over longer distances than
the SRDF/Synchronous (SRDF/S) product. After the VNX is configured, users can continue
to access the SRDF-protected VNX file systems should the local (source) VNX, the Symmetrix
DMX system, or both become unavailable.
This document describes how to use SRDF/A with VNX in an active/passive configuration.
In an active/passive configuration, data is replicated between two attached Symmetrix DMX
systems connected through SRDF/A links. Normal mirroring between the Symmetrix DMX
systems is carried out, while the destination (target) VNX remains on standby. While
functioning as a standby, the destination VNX is powered up and its Control Station is fully
operational. The destination VNX configuration provides complete hardware redundancy
for the source VNX.
Note:
x For the purposes of SRDF/A configuration, it is assumed that the local volumes and SRDF-protected
volumes have their own, dedicated Data Movers. To avoid failover issues, SRDF-protected Data
Movers should contain only SRDF volumes, not local (STD, BCV) volumes. Also, ensure that the
file systems do not span multiple storage systems.
x The VNX continually monitors its internal hardware status and, if properly configured, initiates a
CallHome event to report any problems that could require attention by local EMC personnel.
SRDF/A delta sets
SRDF/A processes asynchronous-mode host writes from the source (R1) to the destination
(R2) by using predetermined cycles of operation called delta sets.
Delta-set logical cycles, which include capture, transmit or receive, and restore, provide
dependent write consistency. Dependent write consistency refers to the maintenance of a
consistent replica of data between the source and destination, achieved by processing and
preserving all writes to the destination in ordered (sequential) numbered sets.
The following steps summarize the data processing in the delta set cycles:
1. Capture cycle N: An SRDF/A delta set (active cycle N) begins on the source (R1) to
capture new writes, overwriting any duplicate tracks intended for transfer over the SRDF/A
link. This cycle is active for a predetermined time period by default, using a cycle time of
30 seconds. Your local EMC Customer Support Representative can help you configure
this on the Symmetrix DMX system.
2. Transmit cycle N-1: After the predetermined cycle time is reached, the delta-set data
moves to the transmit cycle N-1, during which the delta-set data collected on the source
is transmitted to the destination (R2) Symmetrix DMX system. This data is inactive during
the N-1 cycle. A new cycle N starts to collect new writes in preparation for the next
delta-set transfer.
3. Receive cycle N-1: During the N-1 cycle, the delta-set data is collected at the destination
(R2) Symmetrix DMX system. When all N-1 cycle data is successfully transmitted by the
source, and received by the destination, and the minimum cycle time elapses, the delta-set
data moves to the next cycle.
4. Restore cycle N-2: During this cycle, writes are restored to the R2 destination (marked
as device write pendings or destaged to disk). When all writes associated with delta set
N-2 are committed to the R2 destination, the R1 source and R2 destination are considered
to be in a consistent-pair state. A cycle switch occurs only after these conditions are met.
Figure 3 on page 24 illustrates how data is processed in delta set cycles.
A cycle switch occurs only after all these conditions are met:
1. The minimum cycle time elapses.
2. The N-1 cycle finishes transferring data, and the data is fully received at the destination
site.
3. The N-2 cycle finishes restoring data at the destination.
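The three cycle-switch conditions above can be expressed as a single predicate. The sketch below is conceptual only: the actual decision is made inside Symmetrix Enginuity microcode, and all names and values here are hypothetical illustrations:

```python
# Conceptual sketch of the SRDF/A cycle-switch decision. A new capture
# cycle N may begin only when all three documented conditions hold.

def can_switch_cycle(elapsed_s: float,
                     min_cycle_s: float,
                     n1_received: bool,
                     n2_restored: bool) -> bool:
    """True only when (1) the minimum cycle time has elapsed, (2) the
    N-1 cycle data is fully received at the destination, and (3) the
    N-2 cycle data is fully restored at the destination."""
    return elapsed_s >= min_cycle_s and n1_received and n2_restored

# With the default 30-second cycle: even though 35 s have passed and
# N-1 is received, the cycle is elongated while N-2 is still restoring.
print(can_switch_cycle(35.0, 30.0, n1_received=True, n2_restored=False))  # False
print(can_switch_cycle(35.0, 30.0, n1_received=True, n2_restored=True))   # True
```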
Note: The cycle time is elongated if the write transfer or destaging exceeds the set cycle time. A new
cycle cannot begin until all three conditions are met. It is possible a switch could take longer, for
example, 20 minutes or more, depending on how quickly the data travels across the link.
[Figure: an SRDF/A device pair with active sessions. On the Site A (source) Symmetrix,
host I/O enters the capture cycle N (1. Capture writes using the default cycle time of 30
sec, active), then moves to the transmit cycle N-1 (2. Transmit writes to R2 over the
dedicated SRDF link, inactive). On the Site B (target) Symmetrix, the receive cycle N-1
collects the data (3. Receive writes on R2, inactive) and the restore cycle N-2 commits it
(4. Writes restored to the R2 target device, active).]
Figure 3. SRDF/A active/passive configuration featuring delta sets (cycles) to transfer data
Important resource considerations
- Over time, the SRDF/A facility might incur additional load due to increased levels of write
  activity. An example is increased write activity introduced by new applications or growth
  of existing applications.
- During periods of intense write activity, completion of the delta-set transfer could exceed
  20 minutes, depending on how fast data can travel across the link.
- Depending on how often you see extended-length transfers (cycle times), you might need
  to increase the bandwidth capacity between sites. Local EMC personnel can help you
  determine the appropriate amount of bandwidth for your configuration. You can monitor
  the SRDF/A average cycle time and the time R2 is behind R1 by running a SYMCLI query
  command for your SRDF/A device group (for example, /nas/symcli/bin/symrdf -g
  <device_group> query -rdfa, where a sample device group is 1R1_3).
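The bandwidth consideration above can be made concrete with a rough estimate. The following is an illustrative back-of-the-envelope sketch, not an EMC sizing tool: the write rate, link bandwidth, and cycle time are assumed example values, and the model ignores the bandwidth savings from locality of reference, so it is pessimistic.

```python
# Back-of-the-envelope estimate of SRDF/A delta-set transfer time.
# All numbers are illustrative assumptions, not measured values.

def delta_set_transfer_seconds(write_mb_per_s: float,
                               link_mb_per_s: float,
                               cycle_seconds: float = 30.0) -> float:
    """Seconds needed to transmit one delta set across the SRDF link."""
    delta_set_mb = write_mb_per_s * cycle_seconds  # data captured per cycle
    return delta_set_mb / link_mb_per_s

# If the transfer time exceeds the cycle time, cycles elongate and R2
# falls further behind R1 -- a sign that more bandwidth may be needed.
print(delta_set_transfer_seconds(write_mb_per_s=40, link_mb_per_s=20))  # 60.0
```

In this example, a 40 MB/s sustained write rate over a 20 MB/s link needs 60 seconds to drain a 30-second delta set, so cycle times would elongate.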
Differences between SRDF/A and SRDF/S
Table 2 on page 25 summarizes the key differences between SRDF/A and SRDF/S on VNX
systems with Symmetrix storage system.
Table 2. Summary of differences between SRDF/A and SRDF/S

SRDF/S: Works with Symmetrix 3xxx, 5xxx, 8xxx, or DMX systems.
SRDF/A: Requires Symmetrix DMX systems or later versions.

SRDF/S: Complete disaster recovery configuration for systems within relatively short distances.
SRDF/A: Extended-distance replication facility.

SRDF/S: Provides a complete disaster recovery configuration without data loss.
SRDF/A: Provides manual failover and failback of backend storage systems and servers with minimal data loss.

SRDF/S: Provides I/O-level consistency.
SRDF/A: Provides system-wide points of consistency.

SRDF/S: Each write I/O to the source is synchronously replicated to the destination Symmetrix system. The write is acknowledged to the host after the destination Symmetrix system receives and CRC-checks the data.
SRDF/A: Each write I/O to the source is immediately committed on the source image only and sent to the destination in predefined timed cycles (delta sets). The write is acknowledged to the host before the destination Symmetrix system receives it.

SRDF/S: Each I/O is sent to the destination Symmetrix system before an acknowledgment is sent to the host.
SRDF/A: Locality of reference provides more efficient use of network bandwidth. With data sent as cycles of dependent write-consistent data, less data needs to transfer to the destination. If the same track is written to more than once within an active set, SRDF/A sends the update over the link once.

SRDF/S: Supports active/passive and active/active configurations.
SRDF/A: Currently supports active/passive configurations only.

SRDF/S: Can be implemented with TimeFinder/FS NearCopy and FarCopy (adaptive copy disk or disk-pending mode).
SRDF/A: Not implemented with configurations employing TimeFinder/FS NearCopy or FarCopy. NearCopy is not applicable because SRDF/A volumes cannot be source volumes for TimeFinder/FS NearCopy file system snapshots. In a configuration employing FarCopy, adaptive copy disk mode or disk-pending mode can be used instead of SRDF/A.
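The locality-of-reference difference described in the table can be sketched in a few lines of Python. The track IDs and the write trace are hypothetical; the point is only that within one active delta set, repeated writes to the same track cross the link once, whereas synchronous replication sends every write.

```python
# Hypothetical write trace: track IDs written during one 30-second active
# cycle. SRDF/S replicates every write; SRDF/A transfers each distinct
# track once per delta set.

def srdf_s_transfers(writes: list[str]) -> int:
    """Synchronous mode: every write crosses the link."""
    return len(writes)

def srdf_a_transfers(writes: list[str]) -> int:
    """Asynchronous mode: duplicate tracks within one cycle collapse."""
    return len(set(writes))

trace = ["t1", "t2", "t1", "t3", "t1"]
print(srdf_s_transfers(trace))  # 5
print(srdf_a_transfers(trace))  # 3
```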
Appendix A provides illustrations of various business continuance and high-availability
configurations that EMC supports with the VNX. Using SRDF/S with VNX for Disaster
Recovery provides information about SRDF/S, which is the synchronous version of SRDF.
Comparison of VNX for file high-availability and replication products
Table 3 on page 26 lists and compares the different VNX for file product options for disaster
recovery, high availability, and file system replication or copying. The local EMC sales
organization can provide information about the other configuration options.
Note: EMC recommends that all parts of a VNX file system use the same type of disk storage and be
stored on a single storage system. A file system spanning more than one storage system increases
the chance of data loss or data unavailability. Managing Volumes and File Systems with VNX Automatic
Volume Management and Managing Volumes and File Systems for VNX Manually provide more
information.
Table 3. Comparison of VNX for file high-availability and replication products

Product: MirrorView/Synchronous
  Note: This product is currently available through the RPQ process. Contact your local EMC Sales Representative or EMC Service Provider for more information.
Storage platform: VNX for block only, CX series (CX700/600/500/400, CX3-40, or CX3-80)
Description: Using attached VNX for file and VNX for block backend pairs, performs synchronized, remote mirroring to provide full disaster recovery without data loss at a limited distance. Using MirrorView/Synchronous with VNX for File for Disaster Recovery provides more information about MirrorView/S. The RPQ process controls the access to this document.
Restrictions: Cannot be used in the same VNX for file configuration as SRDF. Cannot be used with Symmetrix-based products such as TimeFinder/FS. Cannot be used with the Automatic File System Extension feature. Performs LUN (volume) cloning, not file system cloning; remote volumes are accessible only after a failover. With MirrorView/S, the VNX for file does not see the mirrored LUNs at the destination site until a failover is activated. This is different from SRDF, in which the remote mirrors (R2 volumes) of the Control Station LUNs are visible to the Data Movers.

Product: SRDF/Synchronous
Storage platform: Symmetrix only (3xxx, 5xxx, 8xxx, or DMX series)
Description: Using attached VNX for file and Symmetrix pairs, performs synchronized, remote replication to provide full disaster recovery without data loss at a limited distance. Using SRDF/S with VNX for Disaster Recovery provides more information about SRDF/S.
Restrictions: Cannot be used in the same VNX for file configuration as MirrorView/S. Cannot be used with the Automatic File System Extension feature. Performs volume cloning, not file system cloning; remote volumes are only accessible after a failover.

Product: SRDF/Asynchronous
Storage platform: Symmetrix only, DMX series
Description: Using attached VNX for file and Symmetrix pairs, performs asynchronous, point-in-time replication at an extended distance. This technical module provides more information about SRDF/A.
Restrictions: Cannot be used in the same VNX for file configuration as MirrorView/S. Cannot be used with the Automatic File System Extension feature. Performs volume cloning, not file system cloning; remote volumes are only accessible after a failover.

Product: TimeFinder/FS
Storage platform: Symmetrix only
Description: Using a business continuance configuration with Symmetrix business continuance volumes (BCVs), provides local file system cloning.
Restrictions: A single file system should occupy an entire STD volume. TimeFinder/FS does not perform volume cloning.

Product: TimeFinder/FS NearCopy and FarCopy
Storage platform: Symmetrix only
Description: Using a business continuance configuration with Symmetrix BCVs, provides remote file system cloning, creating a point-in-time copy (snapshot) of a VNX production file system. A remote file system can be mounted and used during normal operation (that is, it is generally accessible). Using TimeFinder/FS, NearCopy, and FarCopy on VNX provides more information about TimeFinder/FS products.
Restrictions: These products do not perform volume cloning. NearCopy is limited to 200 km; FarCopy supports extended distances. NearCopy relies on SRDF/S, and FarCopy relies on SRDF adaptive copy disk or disk-pending mode, to manage file system snapshots. TimeFinder/FS NearCopy and FarCopy do not work with Automatic File System Extension.

Product: VNX Replicator
Storage platform: VNX for file-supported storage (one Symmetrix or VNX for block pair)
Description: Produces a read-only, point-in-time copy of a source file system and periodically updates this copy, making it consistent with the source file system. The read-only copy can be used by a Data Mover in the same VNX cabinet, or by a Data Mover at a remote site for content distribution, backup, and application testing. Using VNX Replicator provides more information.
Restrictions: For TimeFinder/FS, NearCopy, or FarCopy, a business continuance volume (BCV) cannot be a source or a destination file system for replication. You can replicate the underlying source file system, but you cannot replicate the BCV. You cannot use the TimeFinder/FS -Restore option for a replicated source file system. Replication is unaware of any changes because these changes occur at the volume level.

Product: SnapSure
Storage platform: VNX-supported storage
Description: On a VNX system, provides read-only, point-in-time logical copies, also known as checkpoints, of a production file system.
Restrictions: Not intended to be a mirror, disaster recovery, or high-availability tool. Because it is partially derived from realtime PFS data, a checkpoint could become inaccessible (not readable) if the PFS becomes inaccessible. Only checkpoints and a PFS saved to a tape or to an alternate storage location can be used to provide disaster recovery. In version 5.6, SnapSure supports 112 checkpoints, 96 in 5.5, 64 in 5.4, and 32 prior to 5.4.
Planning considerations
Before configuring SRDF/A with the VNX, consider the following planning information:
- Symmetrix system configuration on page 29
- VNX volume and Data Mover decisions (flexibility) on page 30
- VNX Data Mover configuration checklist on page 31
- Consideration when using applications that require transactional consistency on page 32
- Consideration when using applications that can switch to the NFS copy from the R2 without a restart on page 33
- Upgrading the VNX SRDF environment on page 34
Note: Examine the configuration in terms of local volumes versus the intended SRDF-protected
volumes (for example, STD and BCV volumes that are not mirrored at the destination). Because SRDF/A
does not support partial failovers, and all file systems associated with the SRDF-protected Data Movers
are eligible for failover, ensure that different Data Movers support local volumes versus SRDF volumes.
Associate the local volumes with locally protected Data Movers, and associate the SRDF volumes
with dedicated SRDF-protected Data Movers. To avoid SRDF failover issues, do not mount local
volumes on a Data Mover that also mounts SRDF volumes. Chapter 4 provides a description of the
failover and restore issues for SRDF/A.
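The note's rule, that a Data Mover should mount either local volumes or SRDF volumes but never both, lends itself to a simple audit. The function below is an illustrative sketch, not an EMC utility; the mount-table shape and all names are invented.

```python
# Illustrative check of the note above: a Data Mover should mount either
# local volumes or SRDF-protected volumes, never both, because SRDF/A
# does not support partial failovers. The mount table is hypothetical.

def movers_mixing_volumes(mounts: dict[str, list[tuple[str, bool]]]) -> list[str]:
    """Return Data Movers that mount both SRDF and non-SRDF volumes.

    mounts maps a Data Mover name to (volume, is_srdf_protected) pairs.
    """
    mixed = []
    for mover, volumes in mounts.items():
        flags = {is_srdf for _, is_srdf in volumes}
        if flags == {True, False}:  # both kinds present on one mover
            mixed.append(mover)
    return mixed

example = {
    "server_2": [("fs_prod1", True), ("fs_prod2", True)],   # SRDF only: OK
    "server_3": [("fs_local", False), ("fs_mirrored", True)],  # mixed: flag it
}
print(movers_mixing_volumes(example))  # ['server_3']
```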
Symmetrix system configuration
The SRDF/A configuration setup tasks assume that the volumes on Symmetrix DMX systems
are configured in conformance with SRDF requirements, and that the SRDF link is operational
and in asynchronous mode. Your local EMC Customer Support Representative can configure
the volumes on the source and destination Symmetrix systems, and install SRDF software
and communications hardware to enable the Symmetrix systems to communicate with each
other.
The Solutions Enabler Symmetrix SRDF family documentation on the EMC Online Support
website, which includes the Symmetrix Remote Data Facility (SRDF) Product Guide, provides
details about SRDF requirements for Symmetrix systems.
Adding Symmetrix storage devices to the existing system configuration
After initial system configuration, if you want to add Symmetrix storage devices to the
configuration, use the following procedure:
1. Run the server_devconfig -create -scsi -all command to search for newly added Symmetrix
   devices and create new logical disk volumes.
2. Depending on the SRDF configuration, run the /nas/sbin/nas_rdf -init command on the
   source and target Control Stations in the following manner:
   - If the SRDF configuration is active/passive, run the /nas/sbin/nas_rdf -init command
     first on the source and then on the target.
   - If the SRDF configuration is active/active, run the /nas/sbin/nas_rdf -init command
     first on the source Control Station, then on the target Control Station, and then again
     on the source Control Station.
In the active/active configuration, each Symmetrix system is partitioned into primary and
remote volumes. There are two communication links, each connecting the primary volumes
with their remote counterparts on the remote Symmetrix system. Because of this, when you
run the /nas/sbin/nas_rdf -init command on the source Control Station for the first time, the
device configuration database and R1 device groups on the primary side are updated, but
the device configuration database and R2 device groups on the target side are not updated.
When you run the /nas/sbin/nas_rdf -init command on the target Control Station, the device
configuration database for the other direction is updated. The R2 device groups are also
updated based on the device configuration database of the source side. The R2 device
groups on the source Symmetrix are updated based on the target side device configuration
database when you run the /nas/sbin/nas_rdf -init command for the third time.
VNX volume and Data Mover decisions (flexibility)
A typical SRDF/A VNX active/passive configuration provides a full backup of the source site,
Data Mover for Data Mover. However, you can choose which volumes and Data Movers to
protect with remote mirroring. For example, you can remotely mirror some volumes and
Data Movers while others are only locally protected. You do not have to remotely protect all
volumes and Data Movers.
When planning the Data Mover configuration:
- For every source (production) Data Mover that you choose to protect with a remote
  SRDF/A standby Data Mover, you must provide a dedicated standby Data Mover at the
  destination site. There must be a one-to-one relationship between a source Data Mover
  that you choose to protect, and a dedicated remote standby Data Mover at the destination
  site.
- If the source Data Mover with a remote SRDF/A standby Data Mover also has a local
  standby Data Mover, then that local standby must have a remote SRDF/A standby Data
  Mover at the destination site. This prevents issues with failover.
- An SRDF/A standby Data Mover at the destination can be paired with only one source
  Data Mover.
- The network configuration of the SRDF/A standby Data Mover must be a superset of the
  network configuration of the source Data Mover.
- The SRDF/A standby Data Mover must be able to access the R2 data volumes
  corresponding to the primary source Data Mover.
- Local Data Movers can be configured to have a local standby Data Mover, separate from
  the SRDF-protected Data Movers.
Data Mover configurations possible at the local and remote locations in active/passive
SRDF/A environments include:
- On the local VNX:
  - Local production Data Mover paired with SRDF/A remote standby Data Mover
  - Local standby Data Mover paired with SRDF/A remote standby Data Mover
  - Local production Data Mover (non-SRDF/A)
  - Local standby Data Mover (non-SRDF/A)
- On the remote (destination) VNX:
  - SRDF/A standby Data Mover
  - Local production Data Mover (non-SRDF/A)
  - Local standby Data Mover (non-SRDF/A)
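Two of the pairing rules above, the one-to-one source-to-standby relationship and the network-superset requirement, can be expressed as a small validation sketch. This is illustrative only, not an EMC tool, and the Data Mover names and data shapes are hypothetical.

```python
# Illustrative check of two Data Mover pairing rules from the text:
# (1) an SRDF/A standby may serve only one source Data Mover, and
# (2) the standby's network interfaces must be a superset of the source's.

def validate_pairs(pairs: dict[str, str],
                   networks: dict[str, set[str]]) -> list[str]:
    """Return rule violations for source -> standby pairings."""
    problems = []
    # One-to-one: a standby may be paired with only one source Data Mover.
    standbys = list(pairs.values())
    for standby in set(standbys):
        if standbys.count(standby) > 1:
            problems.append(f"{standby} is standby for more than one source")
    # Superset: standby interfaces must cover the source's interfaces.
    for source, standby in pairs.items():
        missing = networks.get(source, set()) - networks.get(standby, set())
        if missing:
            problems.append(f"{standby} lacks interfaces {sorted(missing)}")
    return problems
```

For example, pairing both server_2 and server_3 with the same remote standby, while that standby also lacks one of server_3's interfaces, would report two violations.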
VNX Data Mover configuration checklist
Before performing any VNX procedures to establish SRDF/A, complete the following checklist
to ensure proper Data Mover configuration:
- List which Data Movers to designate as SRDF/A primary and SRDF/A standby Data
  Movers. This is a one-to-one failover relationship. A Data Mover can be an SRDF/A
  standby for only one primary Data Mover, and an SRDF/A primary Data Mover can have
  only one SRDF/A standby. In the initialization procedure, you designate and assign these
  failover relationships. Note that by default, Data Movers are referred to as servers and
  are named server_n, starting with server_2.
- Do not attempt to configure a local Data Mover as an SRDF/A standby if that Data Mover
  is currently configured as a local standby. For an NS series gateway server active/passive
  configuration with two Data Movers established for high availability, remove the default
  local standby status of the destination VNX server's local Data Mover (for example,
  server_3) before configuring that Data Mover as an SRDF/A standby. Configure the Data
  Mover as an SRDF/A standby during the initialization process on the destination VNX
  server, for example:
  server_standby server_2 -delete mover=server_3
  server_setup server_3 -type nas
  In the CNS-14 (14-Data Mover) cabinet configuration shown in this technical module,
  there is no default local standby, so you can proceed with an SRDF/A standby assignment
  during the initialization process. Initialize the destination VNX on page 49 provides more
  information about initialization.
- Ensure that the Data Movers on the source and destination VNX systems are of a
  similar model type; that is, the local SRDF/A Data Mover and its corresponding remote
  standby Data Mover should be the same model or a supported superset. In addition,
  the source and destination cabinets must be a similar model type. For example, they
  should both be NS series gateway cabinets or CNS-14 cabinets, but not a mix of NS
  series gateway and CNS-14 cabinets.
- Consider IP data-network connectivity issues when planning the Data Mover
  assignments.
- Ensure that network interfaces for the SRDF/A primary and SRDF/A standby Data
  Movers are identical, and that the same set of network clients can access the SRDF/A
  primary and the SRDF/A standby Data Movers.
- Ensure that each local standby Data Mover, providing local standby coverage for an
  SRDF-protected Data Mover, has a corresponding remote SRDF/A standby. This is
  a requirement for active/passive configurations.
- Evaluate the destination site's infrastructure: subnet addresses as well as availability
  of NIS/DNS servers in the correct UNIX domain, WINS/PDC/BDC/DC in the correct
  Windows domain, and NTmigrate or usermapper hosts. The CIFS environment requires
  more preparation to set up an SRDF configuration, due to higher demands on its
  infrastructure than the NFS environment. For example, authentication is handled by
  the infrastructure against the client OS. For the CIFS environment, perform mappings
  between usernames/groups and UIDs/GIDs.
Note: If the VNX environment is configured with SRDF protection, and you plan to create checkpoints
of a production file system that resides on an SRDF volume, ensure that the entire SnapSure volume
(the SavVol, which stores the checkpoints) resides in the same pool of SRDF volumes used to create
the production file system. Otherwise, if any part of the SavVol is stored on a local standard (STD)
volume rather than on an SRDF volume, the checkpoints are not failed over and not recoverable in
the SRDF-failback process.
Consideration when using applications that require transactional
consistency
When using applications such as Oracle, which require transactional consistency, failure to
use the forcedirectio NFS client mount option could result in incorrect data on SRDF or
TimeFinder copies.
The use of client-side NFS write buffering can result in missing data in the point-in-time
copies made by SRDF or TimeFinder. To ensure that any point-in-time copies contain
consistent data, ensure that you use the correct client-side NFS mount options to disable
NFS client-side buffering and caching for any application that requires transactional
consistency, such as Oracle or any other DBMS. The EMC Business Continuity for Oracle
Database 11g Enabled by EMC Celerra using DNFS and NFS Proven Solution Guide,
available on http://powerlink.emc.com, provides a summary of all best practice NFS mount
options for Oracle. In addition to the NFS mount options discussed in this document, Solaris
and HP-UX users should include the forcedirectio mount option for Oracle data volumes
to avoid write buffering. Do not use forcedirectio on NFS volumes that contain executables
or other non-DBMS data. Linux users should include the init.ora option filesystemio_options
= DIRECTIO to avoid NFS client-side write buffering.
Consideration when using applications that can switch to the NFS copy
from the R2 without a restart
Applications such as Oracle can switch to the NFS copy from the R2 without a restart.
Because of this, crash recovery does not happen, and data could become corrupt due to
missing dependent writes. Some of the reasons why this happens are:
- Upon failover, Site B or Site C Data Movers will go online using the same IP and MAC
  address as the R1.
- NFS mount options hard and nointr are recommended for Oracle and other such
  applications. These two options cause an NFS client to hold I/O and wait indefinitely if
  connectivity is lost to the NFS server.
- If the Site A and Site B/Site C VNX gateway systems are connected to the same network
  segment, NFS clients will automatically find the R2 server without an application restart.
- In an unplanned failover, Site B/Site C could be at an earlier point in time than Site A.
  For example, the SRDF link stopped working first while Site A still had write I/Os, and
  failover was then performed without propagating the data from Site A due to the
  unavailable SRDF link.
To resolve this issue, a warning message appears, which reminds users to restart their
applications. This warning message appears if the RDF link is partitioned during the SRDF
activation, which could mean Site B/Site C is behind Site A.
The warning message that nas_rdf -activate displays when the RDF link is in a partitioned
state, indicating possible data corruption, is:
!!!WARNING!!!
Some or all devices in the SRDF device group are in a partitioned state!
If you continue with the failover operation, it may result in data loss!
EMC recommends that you:
Fix the partitioned state and ensure that the device group is synchronized,
or is in a consistent state before continuing with the failover operation.
Or
In a true Disaster Recovery scenario, where siteA is unavailable and
synchronizing operation is not possible, continue with the failover operation
and resume or restart your application by using the destination data.
If the partitioned state is in a recovery leg of 3 sites configuration,
this message can be ignored, but it still needs to be solved in order to
keep the 3 sites configuration effective.
Do you wish to continue? [yes or no]:
Mitigating factors for this issue are:
- It is unusual for the customer to use the same layer-2 network segment for Site A and
  Site B/Site C. However, in an SRDF/Star environment, it may be more commonplace
  between Site A and Site B.
- Non-transactional, generic "write a file, read a file" style NFS access should be minimally
  exposed.
Upgrading the VNX SRDF environment
Local EMC personnel can help you determine the upgrade option that best suits your
environment, and perform the procedures to upgrade the VNX systems in an SRDF
environment to version 5.6 software for SRDF/A.
Because you have remotely protected data, the procedure enables you to restore from the
destination volumes as a backout option, if needed.
To upgrade SRDF/A (or SRDF/S):
1. Halt all source Data Movers and shut down the Control Station.
2. Ensure that the source and destination Symmetrix DMX VNX volumes are synchronized.
3. Halt the SRDF links to ensure that the appropriate device groups are suspended.
4. Restart the source Control Station and Data Movers.
5. Perform the upgrade on the source site.
6. Test or verify the upgrade.
7. Resume the SRDF links. For SRDF/A, ensure that the devices in the appropriate device
groups are in a Consistent state.
You can now upgrade the destination by using steps 1 through 7. You must upgrade the
destination VNX immediately after you upgrade the source VNX.
Note: The appropriate device group state for SRDF/A is Consistent. If it is not, consult the local EMC
personnel or local EMC Service Provider to ensure that the SRDF/A device groups are established
and that the devices in those groups are in a Consistent state.
SRDF/A task overview
Table 4 on page 35 provides an overview of the basic SRDF/A tasks to establish
active/passive SRDF/A on a source and destination VNX and their associated commands.
When the activate and restore tasks are run, automatic, internal SRDF health checks are
performed before activating a failover and before restoring source and destination VNX
systems, respectively. Adding the -nocheck option to the activate and restore commands
allows you to skip this health check. For example:
# /nas/sbin/nas_rdf -activate -nocheck
or
# /nas/sbin/nas_rdf -restore -nocheck
Table 5 on page 39 provides details of the health checks run by each command. Prior to
performing any of these tasks, review the requirements summarized in Symmetrix system
configuration on page 29 to ensure that the Symmetrix DMX systems and VNX Data Movers
are configured correctly. In general, the local EMC Customer Support Representative is
available to help you set up and manage the SRDF/A configuration. Chapter 4 describes
how to gather information and resolve problems associated with SRDF/A configurations.
Table 4. SRDF/A task overview

Task: Preinitialize the relationship between the source and destination VNX.
Command used: From each VNX, using nasadmin, log in as root:
nas_cel -create <cel_name> -ip <ip> -passphrase <passphrase>
Description:
- Establishes trusted communication between source and destination VNX systems as a
  prerequisite to SRDF/A initialization. Must be performed on both VNX systems, using the
  same passphrase (6-15 characters).
- The nas_cel command replaces one of the nas_rdf -init commands from the previous
  versions.

Task: Initialize an SRDF/A relationship between attached VNX/Symmetrix pairs.
Command used: From source and destination, as root:
/nas/sbin/nas_rdf -init
Description:
- If all SRDF volumes are in a Consistent state, enables the designated destination VNX
  to provide full file system access and functionality if a site failure occurs.
- Configures the Control Station to use SRDF/A.
- Identifies the remote destination VNX paired with the source VNX.
- Identifies the volume mapping on the Symmetrix DMX system, which maps R1 volumes
  to their R2 counterparts.
- Establishes the Data Mover relationships from the production Data Mover to the standby.
- Runs the SRDF session state check before initialization.
- Runs the SRDF standby Data Mover configuration check and the Symmetrix device state
  check after initialization.
- Runs the device group configuration check and the Data Mover mirrored device
  accessibility check for both R1 and R2 device groups after initialization.

Task: Activate an SRDF/A failover from a source VNX to a destination VNX.
Command used: From destination, as root:
/nas/sbin/nas_rdf -activate
Description:
- Performs a test failover or manual failover, for example, when a source VNX attached to
  a source Symmetrix system becomes unavailable. After the failover, users have access
  to the same file systems using the same network addresses as they did on the source,
  provided they have network access to the remote VNX site.
- Sets each R1 volume on the source VNX (the one failing over) as read-only. This only
  occurs if the source Symmetrix system is available.
- Sets each R2 volume on the remote Symmetrix as read/write.
- Enables each SRDF/A standby Data Mover on the remote Symmetrix to acquire the
  following characteristics of its source counterpart:
  - Network identity: IP and MAC addresses of all network interface cards (NICs) in the
    failed Data Mover.
  - Service identity: Network File System/Common Internet File Service (NFS/CIFS)
    characteristics of the exported file system controlled by the failed Data Mover.
- Runs the SRDF standby Data Mover configuration check, the SRDF session state check,
  and the Symmetrix device state check before activating a failover.
- Runs the device group configuration check and the Data Mover mirrored device
  accessibility check for R2 device groups before activating a failover.

Task: Restore a source VNX after a failover.
Command used: From destination, as root:
/nas/sbin/nas_rdf -restore
Description:
- Typically scheduled by and performed under the guidance of your local EMC Customer
  Support Representative to ensure continuity between the Symmetrix systems. Restoration
  of a source VNX involves a complete check of the Symmetrix system and SRDF/A, and
  verification of full connectivity to the restored file systems on the source VNX.
- Copies data from R2 volumes to the corresponding R1 volumes on the source Symmetrix
  system.
- Restarts SRDF standby Data Movers into standby mode.
- Write-disables R2 volumes from the Data Movers.
- Synchronizes R2 to R1.
- Resumes mirroring of the R1 devices.
- Restarts each Data Mover on the source VNX, which reacquires the IP addresses and
  file system control from the SRDF standby Data Movers.
- Runs the SRDF session state check, the Symmetrix device state check, and the SRDF
  restored state check before restoring a destination VNX.
- Runs the device group configuration check and the Data Mover mirrored device
  accessibility check for R2 device groups before restoring a destination VNX.
- Runs the Symmetrix device state check before restoring a source VNX.
- Runs the device group configuration check and the Data Mover mirrored device
  accessibility check for R1 device groups before restoring a source VNX.
Table 5. Health check details

SRDF standby Data Mover configuration check: Lists all slot IDs of SRDF source Data Movers
on the source side, and compares the list with the list of all slot IDs of SRDF standby Data
Movers on the destination side. If a slot from the source side is not an RDF standby Data
Mover on the destination side, a warning message with the slot ID is displayed.

SRDF session state check: Checks whether all devices are synchronized. If they are not
synchronized and they are in various states, a warning message is displayed. If none of them
is synchronized but all of them are in the same state (for example, all of them are Consistent,
or all of them are Partitioned), no warning message is displayed. When the nas_rdf -init
command is run, if all the devices are partitioned, a warning message is displayed.

Device group configuration check: Lists all mirrored devices of an RDF group on the back
end (symrdf list) and compares the list with the list of devices in a device group created by
using the symdg command (symrdf -g <device group> query). If there is a missing device in
the device group, a warning message is displayed.

Data Mover mirrored device accessibility check: Lists all devices in a device group, and
compares the list with the list of devices obtained by using the server_devconfig -probe -scsi
-all command. If there is a device in the device group that is not present in the output of the
probe command, a warning message is displayed. Some control volumes do not have a
TID/LUN, depending on their configuration, and these are ignored in the check.

Symmetrix device state check: Checks whether there is a degraded or failed state device on
the back end by using the symdev list -service_state notnormal command. If there is a
degraded or failed state device, a warning message is displayed. All degraded or failed
devices on the Symmetrix, including the devices that are not used for VNX, are listed in the
message.

SRDF restored state check: Checks and displays an error message if the source volumes
are R1 and ReadWrite, and if the destination volumes are R2 and ReadOnly. It does this by
using the symdg show command.
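The device group configuration check described in Table 5 is essentially a set difference between two device lists. The sketch below illustrates that comparison with invented device names; a real check would parse symrdf list and symrdf -g <device group> query output rather than take sets as input.

```python
# Sketch of the device-group configuration check from Table 5: compare the
# mirrored devices reported on the back end with the devices in the symdg
# device group, and warn about any device the group is missing.
# Device names (00A1, ...) are invented for illustration.

def missing_from_device_group(backend_devices: set[str],
                              group_devices: set[str]) -> set[str]:
    """Mirrored devices on the back end that the device group lacks."""
    return backend_devices - group_devices

backend = {"00A1", "00A2", "00A3"}   # as if parsed from 'symrdf list'
group = {"00A1", "00A3"}             # as if parsed from the group query
for dev in sorted(missing_from_device_group(backend, group)):
    print(f"Warning: device {dev} is missing from the device group")
```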
Overview of the sample configuration
To illustrate the configuration steps, a sample configuration is used in which the tasks are
performed on the following VNX systems:
- cs100_src serves as the source VNX. It resides in the production data center.
- cs110_dst serves as the destination VNX. It resides in a remote disaster-recovery data
  center.
3
Configuring
The tasks to configure SRDF/A are:
- Preinitialize the configuration on page 42
- Initialize the configuration (active/passive) on page 45
- Activate a failover (active/passive) on page 56
- Restore the source VNX on page 69
Preinitialize the configuration
As part of establishing an SRDF/A configuration, the local EMC Service Provider preinitializes
the VNX systems. Preinitialization establishes a trusted communication between the source
and destination VNX systems.
Prerequisites
x Preinitialization must be performed on both the VNX systems.
x The source and destination Control Station system times must be within 10 minutes
of each other.
x Preinitialization must be performed by using the same 6-15 character passphrase
for both the VNX systems. An example is nasadmin.
x To preinitialize, the user must log in to the VNX as nasadmin.
x The nas_cel command must be run with the -create option as nasadmin on both the
VNX systems. Systems using pre-5.6 versions of the software used the nas_rdf -init
command for preinitialization. Versions 5.6 and later use the nas_cel -create command.
x The preinitialization tasks are performed only once, after which the servers become
ready for the SRDF/A initialization procedures.
x The VNX systems can be set up and in production prior to initialization.
x Preinitialization is a prerequisite to the SRDF/A initialization.
You can preinitialize the SRDF/A configuration by using IPv4 or IPv6 IP addresses. While
doing this, make sure that the corresponding destination Control Station uses the same
IPv4/IPv6 network protocol as the source Control Station. An IPv4 Control Station cannot
connect to an IPv6 Control Station. Similarly, an IPv6 Control Station cannot connect to
an IPv4 Control Station. The Control Stations on both sides must use the same version
of the protocol.
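Before running nas_cel -create, you can sanity-check that the two Control Station addresses belong to the same protocol family. This helper is a hypothetical convenience, not part of the VNX software; the addresses shown are taken from the sample configuration:

```shell
#!/bin/sh
# Hypothetical pre-check: classify an address as IPv4 or IPv6 by the
# presence of a colon, and warn when the two families differ.
ip_family() {
  case "$1" in
    *:*) echo ipv6 ;;
    *)   echo ipv4 ;;
  esac
}

src=192.168.97.140   # source Control Station (cs100_src)
dst=192.168.97.141   # destination Control Station (cs110_dst)

if [ "$(ip_family "$src")" = "$(ip_family "$dst")" ]; then
  echo "OK: both Control Stations use $(ip_family "$src")"
else
  echo "ERROR: an IPv4 Control Station cannot connect to an IPv6 one" >&2
fi
```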
Verify the preinitialization once after preinitializing from the source side and then after
preinitializing from the destination side.
Note: Before performing any of the following tasks, review the requirements summarized in Planning
considerations on page 28 to ensure that the Symmetrix systems and VNX Data Movers are
configured correctly. In general, the local EMC Customer Support Representative is available to
help you set up and manage any aspect of the SRDF/A configuration.
To preinitialize the configuration:
1. Preinitialize from the source (first) VNX on page 43
2. Preinitialize from the destination (second) VNX on page 43
3. Verify the preinitialization on page 44
Preinitialize from the source (first) VNX
1. Log in to the source VNX (cs100_src) as nasadmin.
2. Preinitialize the connection from the source (first) VNX to the destination (second) VNX in a VNX/Symmetrix
configuration for SRDF/A by using this command syntax:
$ nas_cel -create <cel_name> -ip <ip> -passphrase <passphrase>
where:
<cel_name> = name of the destination VNX
<ip> = IP address of the destination Control Station in slot 0
<passphrase> = 6-15 character password
Example:
To preinitialize the connection from cs100_src to cs110_dst with IP address 192.168.97.141 and passphrase
nasadmin, type:
$ nas_cel -create cs110_dst -ip 192.168.97.141 -passphrase nasadmin
Output:
operation in progress (not interruptible)...
id = 1
name = cs110_dst
owner = 0
device =
channel =
net_path = 192.168.97.141
celerra_id = 0001901005570354
passphrase = nasadmin
3. Exit by typing:
$ exit
Output:
exit
Preinitialize from the destination (second) VNX
1. Log in to the destination VNX (cs110_dst) as nasadmin.
2. Preinitialize the connection from the destination (second) VNX to the source (first) VNX in a VNX/Symmetrix
configuration for SRDF/A by using this command syntax:
$ nas_cel -create <cel_name> -ip <ip> -passphrase <passphrase>
where:
<cel_name> = name of the source VNX
<ip> = IP address of the remote Control Station in slot 0
<passphrase> = 6-15 character password
Example:
To preinitialize the connection from cs110_dst to cs100_src with IP address 192.168.97.140 and passphrase
nasadmin, type:
$ nas_cel -create cs100_src -ip 192.168.97.140 -passphrase nasadmin
Output:
operation in progress (not interruptible)...
id = 0
name = cs100_src
owner = 0
device =
channel =
net_path = 192.168.97.140
celerra_id = 000190100582034D
passphrase = nasadmin
3. Exit by typing:
$ exit
Output:
exit
Verify the preinitialization
1. Log in to the source VNX (cs100_src) as nasadmin.
2. Verify preinitialization of cs100_src and cs110_dst by typing:
$ nas_cel -list
Output:
id name owner mount_dev channel net_path CMU
0 cs100_src 0 192.168.97.140 000190100582034D
1 cs110_dst 0 192.168.97.141 0001901005570354
Note:
x The id is always 0 for the system from which you are running the command.
x You do not have to run nas_cel -list or nas_cel -info as root. You must run nas_cel as root for
create, modify, update, or delete operations.
Initialize the configuration (active/passive)
Initializing an active/passive configuration establishes one VNX to serve as the source
(production) file server and another to serve as the destination.
Prerequisites
x VNX software must be installed on the source and destination VNX systems.
x SRDF/A link must be operational between the source and destination VNX systems.
x The requirements summarized in Symmetrix system configuration on page 29 must
be met to ensure that the Symmetrix systems and VNX Data Movers are configured
correctly.
x All SRDF/A volumes must be in a consistent state. Your local EMC Customer Support
Representative can help ensure volume consistency.
To initialize an active/passive configuration:
x Initialize the source VNX on page 45
x Initialize the destination VNX on page 49
x Verify SRDF/A on the source VNX (active/passive) on page 53
Initialize the source VNX
In an active/passive configuration, establish the source VNX (active site) first.
1. Log in to the source VNX (cs100_src) as nasadmin.
2. Verify the slots on the source VNX by typing:
# /nas/sbin/getreason
Output:
10 - slot_0 primary control station
5 - slot_2 contacted
5 - slot_3 contacted
5 - slot_4 contacted
5 - slot_5 contacted
3. List the Data Movers (servers) on the source VNX by typing:
# /nas/bin/nas_server -list
Output:
id type acl slot groupID state name
1 1 1000 2 0 server_2
2 4 1000 3 0 server_3
3 1 1000 4 0 server_4
4 4 1000 5 0 server_5
4. Get detailed information about the Data Movers on the source VNX by typing:
# nas_server -info -all
Output:
id = 1
name = server_2
acl = 1000, owner=nasadmin, ID=201
type = nas
slot = 2
member_of =
standby = server_3, policy=auto
status :
defined = enabled
actual = online, active
id = 2
name = server_3
acl = 1000, owner=nasadmin, ID=201
type = standby
slot = 3
member_of =
standbyfor= server_2
status :
defined = enabled
actual = online, ready
id = 3
name = server_4
acl = 1000, owner=nasadmin, ID=201
type = nas
slot = 4
member_of =
standby = server_5, policy=auto
status :
defined = enabled
actual = online, ready
id = 4
name = server_5
acl = 1000, owner=nasadmin, ID=201
type = standby
slot = 5
member_of =
standbyfor= server_4
status :
defined = enabled
actual = online, ready
5. Switch (su) to root by typing:
$ su
Output:
Password:
6. Start the active/passive SRDF/A initialization process on the source VNX by typing:
# /nas/sbin/nas_rdf -init
Output:
Discover local storage devices ...
Discovering storage (may take several minutes)
done
Start R2 dos client...
done
Start R2 nas client...
done
Contact cs110_dst... is alive
7. At the prompt, assign the remote standby Data Mover on the destination VNX for each source (primary) Data
Mover on the source VNX. For each source Data Mover on cs100_src, type the slot number of its corresponding
standby on cs110_dst.
Note: The actual standby designation occurs later in the procedure. To determine which Data Movers to designate
as standbys, see VNX Data Mover configuration checklist on page 31. Data Movers are referred to as servers,
and named server_n, starting with server_2. In the example, server_2 (source Data Mover) and server_3 (local
standby Data Mover) have remote SRDF/A standby Data Movers, that is, local server_2, slot 2, fails over to the
remote server in slot 2, and local server_3, slot 3, fails over to the remote server in slot 3. Also, server_4 and
server_5 are local Data Movers not protected by SRDF (not configured with SRDF/A standby Data Movers).
Example:
Please create an rdf standby for each server listed
server server_2 in slot 2, remote standby in slot [2] (or none): 2
server_2 : done
id = 1
name = server_2
acl = 1000, owner=nasadmin, ID=201
type = nas
slot = 2
member_of =
standby = server_3, policy=auto
RDFstandby= slot=2
status :
defined = enabled
actual = online, active
server server_3 in slot 3, remote standby in slot [3] (or none): 3
server_3 : done
id = 2
name = server_3
acl = 1000, owner=nasadmin, ID=201
type = standby
slot = 3
member_of =
standbyfor= server_2
RDFstandby= slot=3
status :
defined = enabled
actual = online, ready
server server_4 in slot 4, remote standby in slot [4] (or none):
none
server server_5 in slot 5, remote standby in slot [5] (or none):
none
The slot [ 2 3 ] on the remote cs110_dst system must be an rdf
standby Data Mover(s). Currently, the slot [ 2 3 ] is not configured
as an rdf standby Data Mover(s). Please run the nas_rdf -init
command on the remote cs110_dst system.
Initialize the destination VNX
1. Log in to the destination VNX (cs110_dst) as nasadmin.
2. Verify the slots on the destination VNX by typing:
# /nas/sbin/getreason
Output:
10 - slot_0 primary control station
5 - slot_2 contacted
5 - slot_3 contacted
5 - slot_4 contacted
5 - slot_5 contacted
3. List the Data Movers (servers) on the destination VNX by typing:
# /nas/bin/nas_server -list
Output:
id type acl slot groupID state name
1 1 1000 2 0 server_2
2 1 1000 3 0 server_3
3 1 1000 4 0 server_4
4 4 1000 5 0 server_5
Note: The output indicates that server_2 and server_3 are free to be configured as SRDF/A standbys for the
source VNX (cs100_src), because they are not local standbys.
4. Get detailed information about the Data Movers on the destination VNX by typing:
# nas_server -info -all
Output:
id = 1
name = server_2
acl = 1000, owner=nasadmin, ID=201
type = nas
slot = 2
member_of =
standby =
status :
defined = enabled
actual = online, ready
id = 2
name = server_3
acl = 1000, owner=nasadmin, ID=201
type = nas
slot = 3
member_of =
standby =
status :
defined = enabled
actual = online, ready
id = 3
name = server_4
acl = 1000, owner=nasadmin, ID=201
type = nas
slot = 4
member_of =
standby = server_5, policy=auto
status :
defined = enabled
actual = online, ready
id = 4
name = server_5
acl = 1000, owner=nasadmin, ID=201
type = standby
slot = 5
member_of =
standbyfor= server_4
status :
defined = enabled
actual = online, ready
5. Switch (su) to root by typing:
$ su
Output:
Password:
6. Initialize the destination VNX for active/passive SRDF/A by typing:
# /nas/sbin/nas_rdf -init
Output:
Discover local storage devices ...
Discovering storage (may take several minutes)
done
Start R2 dos client...
done
Start R2 nas client...
done
Contact cs100_src ... is alive
7. At the prompt, create the RDF administrative account (rdfadmin) on the Control Station.
Note: Use the rdfadmin account to manage the destination SRDF/A standby Data Movers on cs110_dst that
provide the failover capability for cs100_src. The RDF site passphrase must consist of 6-15 characters, and
be the same for the source and destination VNX systems.
Example:
Create a new login account to manage the RDF site CELERRA
Caution: For an active-active configuration, avoid using the same
UID that was used for the rdfadmin account on the other side.
New login username and UID (example: rdfadmin:500): rdfadmin:500
done
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
Changing password for user rdfadmin.
passwd: all authentication tokens updated successfully.
done
operation in progress (not interruptible)...
id = 1
name = cs100_src
owner = 500
device = /dev/ndj1
channel = rdev=/dev/ndg, off_MB=391; wdev=/dev/nda, off_MB=391
net_path = 192.168.97.140
celerra_id = 000190100582034D
passphrase = rdfadmin
Discover remote storage devices ...done
The following servers have been detected on the system:
id type acl slot groupID state name
1 1 1000 2 0 server_2
2 1 1000 3 0 server_3
3 1 1000 4 0 server_4
4 4 1000 5 0 server_5
Note: The IDs of the Data Movers in the id column are used in the next step.
8. At the prompt, designate the Data Movers on the destination VNX as SRDF/A standby Data Movers for the
source VNX. To designate the standby Data Movers, type the ID number (not the slot number) of a single Data
Mover or ID numbers for multiple servers, separated by spaces.
Note: For failover capability, each standby Data Mover previously assigned to a source (primary) Data Mover
on cs100_src must be designated as a standby on cs110_dst. Review the Data Mover list created in VNX Data
Mover configuration checklist on page 31 to determine which servers to designate as standbys. Ensure that
you review the checklist if you have an NS series gateway configuration with two Data Movers, and a default
local standby Data Mover. During the source VNX initialization process, the term "slots" identifies the remote
SRDF/A standby Data Mover. During the destination VNX initialization process, the term "server_ids" identifies
the SRDF/A standby Data Mover.
CAUTION Before you set up an SRDF standby Data Mover at the destination site, check the Data
Mover configurations at the source and destination sites. Verify that the SRDF standby Data Mover is
in the appropriate slot. Verify that the hardware, including network interface adapters, is the same for
the source and SRDF standby Data Movers for failover purposes. Also, ensure that the destination
Data Movers do not have any local standbys defined, as provided in the default installation, for example,
server_5 is, by default, a standby for server_2, server_3, and server_4.
Example:
Please enter the id(s) of the server(s) you wish to reserve
(separated by spaces) or "none" for no servers.
Select server(s) to use as standby: 1 2
id = 1
name = server_2
acl = 2000, owner=rdfadmin, ID=500
type = standby
slot = 2
member_of =
standbyfor=
status :
defined = enabled
actual = boot_level=0
id = 2
name = server_3
acl = 2000, owner=rdfadmin, ID=500
type = standby
slot = 3
member_of =
standbyfor=
status :
defined = enabled
actual = boot_level=0
operation in progress (not interruptible)...
id = 1
name = cs100_src
owner = 500
device = /dev/ndj1
channel = rdev=/dev/ndg, off_MB=391; wdev=/dev/nda, off_MB=391
net_path = 192.168.97.140
celerra_id = 000190100582034D
passphrase = rdfadmin
Please create a rdf standby for each server listed
server server_5 in slot 5, remote standby in slot [5] (or none):
none
server server_4 in slot 4, remote standby in slot [4] (or none):
none
Note: If you use the nas_server -list command to list the available Data Movers on the destination VNX
(cs110_dst), only server_4 and server_5 appear. Data Movers server_2 and server_3 are unavailable to the
nasadmin account because they are now managed by the rdfadmin account.
9. Exit root by typing:
# exit
Output:
exit
Note: The initialization process is complete and the active/passive configuration is established. The destination
VNX is ready to provide full file system access and functionality if a source site failure occurs.
Important: If the IP address of the Control Station changes after the initialization process begins, rerun
the /nas/sbin/nas_rdf -init command to accommodate the change. If you change any hostnames or IP
addresses and want to accommodate the change before you run the initialization process, edit and
update the /etc/hosts file and ensure that each host can resolve its node name.
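After editing /etc/hosts, you can confirm that each Control Station name maps to the address you expect. The sketch below parses sample /etc/hosts-style lines rather than the live file; the names and addresses are those of the sample configuration:

```shell
#!/bin/sh
# Sketch: resolve a hostname from /etc/hosts-style data. In practice you
# would read /etc/hosts (or use getent hosts); sample lines are used here.
hosts_data="192.168.97.140 cs100_src
192.168.97.141 cs110_dst"

# Print the address recorded for the given hostname, if any.
lookup() {
  printf '%s\n' "$hosts_data" | awk -v name="$1" '$2 == name { print $1 }'
}

echo "cs100_src -> $(lookup cs100_src)"
echo "cs110_dst -> $(lookup cs110_dst)"
```

An empty result for either name would indicate that the hosts file still needs to be updated before rerunning the initialization.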
Verify SRDF/A on the source VNX (active/passive)
After initialization, run VNX CLI commands, informational Solutions Enabler Symmetrix CLI
(SYMCLI) commands, or both to verify SRDF/A on the source VNX.
1. Log in to the source VNX (cs100_src) as nasadmin.
2. List the file systems on the source VNX by typing:
$ nas_fs -list
Output:
id inuse type acl volume name server
1 n 1 0 74 root_fs_1
2 y 1 0 76 root_fs_2 1
3 y 1 0 78 root_fs_3 2
4 y 1 0 80 root_fs_4 3
5 y 1 0 82 root_fs_5 4
6 n 1 0 84 root_fs_6
7 n 1 0 86 root_fs_7
8 n 1 0 88 root_fs_8
9 n 1 0 90 root_fs_9
10 n 1 0 92 root_fs_10
11 n 1 0 94 root_fs_11
12 n 1 0 96 root_fs_12
13 n 1 0 98 root_fs_13
14 n 1 0 100 root_fs_14
15 n 1 0 102 root_fs_15
16 y 1 0 104 root_fs_common 2,1,3
17 n 5 0 137 root_fs_ufslog
18 n 5 0 140 root_fs_d3
19 n 5 0 141 root_fs_d4
20 n 5 0 142 root_fs_d5
21 n 5 0 143 root_fs_d6
22 n 1 0 156 fs1
23 y 1 0 157 fs2 1
24 n 1 0 161 fs8k
25 n 1 0 162 fs32k
26 n 1 0 163 fs64k
27 n 1 0 164 ufs1
28 n 5 0 165 root_panic_reserve
29 n 1 0 229 ufs1_snap1
31 y 1 0 233 fs1a 1
32 y 1 0 234 fst1 1
35 y 1 0 292 fs5 3
3. List the device groups on the source VNX by typing:
$ /nas/symcli/bin/symdg list
Output:
D E V I C E G R O U P S
Number of
Name Type Valid Symmetrix ID Devs GKs BCVs VDEVs TGTs
1R1_3 RDF1 Yes 000190100582 70 0 0 0 0
1REG REGULAR Yes 000190100582 48 0 64 0 0
Note:
x In the sample configuration, these device groups represent the following: 1R1_3 represents
the SRDF/A-protected R1 data volumes and 1REG represents the local (non-SRDF/A)
volumes. The device groups in the configuration might have different names.
x Use only Solutions Enabler Symmetrix SYMCLI informational commands on the VNX
device group; do not invoke any Solutions Enabler Symmetrix SYMCLI action commands
using the Control Station host component, laptop service processor, or EMC ControlCen-
ter. The Solutions Enabler Symmetrix SRDF family documentation on the EMC Online
Support website provides more information.
4. View key SRDF/A information for the device group representing the R1 data volumes (for example, 1R1_3) by
typing:
$ /nas/symcli/bin/symrdf -g 1R1_3 query -rdfa
Note: Ensure that you provide the appropriate device group name with the command. Information you can
monitor is highlighted in the output. Also, the output uses "..." to indicate a continuation of the device entries;
not all entries are listed.
Output:
Device Group (DG) Name : 1R1_3
DG's Type : RDF1
DG's Symmetrix ID : 000190100582
RDFA Session Number : 2
RDFA Cycle Number : 139
RDFA Session Status : Active
RDFA Minimum Cycle Time : 00:00:30
RDFA Avg Cycle Time : 00:00:30
Duration of Last cycle : 00:00:30
RDFA Session Priority : 33
Tracks not Committed to the R2 Side: 36
Time that R2 is behind R1 : 00:00:36
RDFA R1 Side Percent Cache In Use : 0
RDFA R2 Side Percent Cache In Use : 0
Transmit Idle Time : 00:00:00
Source (R1) View Target (R2) View MODES
--------------------------- -------------------- ---- --------
ST LI ST
Standard A N A
Logical T R1 Inv R2 Inv K T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MDAC STATE
--------------------------- -- ------------------- ---- ---------
d1 034D RW 0 0 RW 034D WD 0 0 A... Consistent
d2 034E RW 0 0 RW 034E WD 0 0 A... Consistent
d3 034F RW 0 0 RW 034F WD 0 0 A... Consistent
d4 0350 RW 0 0 RW 0350 WD 0 0 A... Consistent
d5 0351 RW 0 0 RW 0351 WD 0 0 A... Consistent
d6 0352 RW 0 0 RW 0352 WD 0 0 A... Consistent
d7 035A RW 0 0 RW 035A WD 0 0 A... Consistent
d8 035B RW 0 0 RW 035B WD 0 0 A... Consistent
d9 035C RW 0 0 RW 035C WD 0 0 A... Consistent
d10 035D RW 0 0 RW 035D WD 0 0 A... Consistent
...
d138 0397 RW 0 0 RW 0397 WD 0 0 A... Consistent
d139 0398 RW 0 0 RW 0398 WD 0 0 A... Consistent
d140 0399 RW 0 0 RW 0399 WD 0 0 A... Consistent
Total -------- ----- ------ ------
Track(s) 0 0 0 0
MB(s) 0.0 0.0 0.0 0.0
Legend for MODES:
M(ode of Operation): A = Async, S = Sync, E = Semi-sync, C = Adaptive
Copy
D(omino) : X = Enabled, . = Disabled
A(daptive Copy) : D = Disk Mode, W = WP Mode, . = ACp off
C(onsistency State): X = Enabled, . = Disabled, - = N/A
Note: Check the session status, average cycle time, tracks not committed to R2, time R2 is behind R1 (in sec-
onds), as well as the mode (A for asynchronous) and RDF pair state (Consistent).
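The monitoring fields called out in the note can be scraped from the query output for routine checks. This is a sketch only; the here-text is a trimmed, assumed sample of `symrdf ... query -rdfa` output, not live data:

```shell
#!/bin/sh
# Sketch: extract the session status and R2 lag from trimmed sample
# `symrdf query -rdfa` output for monitoring purposes.
query="RDFA Session Status     : Active
Tracks not Committed to the R2 Side: 36
Time that R2 is behind R1 : 00:00:36"

status=$(printf '%s\n' "$query" | sed -n 's/^RDFA Session Status *: *//p')
lag=$(printf '%s\n' "$query" | sed -n 's/^Time that R2 is behind R1 *: *//p')

echo "session status: $status"
echo "R2 behind R1 by: $lag"
```

A status other than Active, or a steadily growing lag, would warrant investigation of the SRDF/A link.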
Activate a failover (active/passive)
Activating a failover enables the destination VNX to assume the source (primary) role in the
active/passive SRDF/A configuration.
Prerequisites
You might decide to perform a planned (graceful) failover for testing purposes, which
involves preparation before the failover is activated, or you might need to activate an
unplanned failover in response to a disaster scenario.
In an unplanned failover scenario, assume that the source VNX attached to a source
Symmetrix system is unavailable, and requires a failover to the destination VNX.
Regardless of the planned or unplanned failover scenario, you always activate a failover
by running the /nas/sbin/nas_rdf -activate command from the destination VNX.
To activate an active/passive SRDF/A failover:
1. Prepare for a graceful failover on page 57
2. Activate a failover from the destination VNX on page 59
3. Verify SRDF/A after failover activation on page 63
4. Ensure access after failover on page 69
CAUTION For sites with redundant Control Stations, ensure that all SRDF management
commands including /nas/sbin/nas_rdf -init, -activate, and -restore are run from the primary
Control Station located in slot 0 (CS0). Ensure that CS1 is powered off at both sites before
you run any -activate or -restore commands. When the CS1 shutdown process completes,
type /nas/sbin/getreason and check the output to verify the shutdown. The output should
contain the line 0 - slot_1 powered off.
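The getreason verification in the caution above can be scripted. The block below parses a sample of getreason output rather than calling /nas/sbin/getreason itself, so the reason codes shown are assumptions for illustration:

```shell
#!/bin/sh
# Sketch: confirm that CS1 (slot_1) reports "powered off" before running
# nas_rdf -activate or -restore. Sample text stands in for getreason output.
reason_output="10 - slot_0 primary control station
 0 - slot_1 powered off
 5 - slot_2 contacted
 5 - slot_3 contacted"

if printf '%s\n' "$reason_output" | grep -q '0 - slot_1 powered off'; then
  echo "CS1 is powered off; safe to run -activate or -restore"
else
  echo "CS1 is still up; shut it down at both sites first" >&2
fi
```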
Prepare for a graceful failover
1. Log in to the source VNX (cs100_src) as nasadmin and switch (su) to root.
2. Halt all Data Movers and restart the source Control Station (CS0) for the source VNX by typing:
# /nas/sbin/nas_halt now
Output:
************************** WARNING! ****************************
You are about to HALT this Celerra including all of its Control
Stations and Data Movers.
DATA will be UNAVAILABLE when the system is halted.
Note that this command does *not* halt the storage array.
Note: As an alternative to nas_halt, you can run server_cpu ALL -halt now & to halt all the Data Movers.
However, this requires an explicit restart of the source Control Station (CS0) with -reboot -f -n. For an NS600,
server_cpu is the recommended command.
3. At the prompt, confirm the halting by typing:
yes
Example:
ARE YOU SURE YOU WANT TO CONTINUE? [ yes or no ] : yes
Sending the halt signal to the Master Control Daemon...: Done
Feb 7 18:39:35 cs100_src EMCServer: nas_mcd: Check and halt other
CS...: Done
Feb 7 18:40:00 cs100_src get_datamover_status: Data Mover server_2:
COMMAND doesnt match.
Feb 7 18:40:00 cs100_src get_datamover_status: Data Mover server_5:
COMMAND doesnt match.
Feb 7 18:40:01 cs100_src get_datamover_status: Data Mover server_3:
COMMAND doesnt match.
Feb 7 18:40:01 cs100_src get_datamover_status: Data Mover server_4:
COMMAND doesnt match.
Feb 7 18:40:12 cs100_src setup_enclosure: Executing -dhcpd stop op-
tion
Feb 7 18:40:12 cs100_src snmptrapd[4743]: Stopping snmptrapd
Feb 7 18:40:12 cs100_src EV_AGENT[7847]: Signal TERM received
Feb 7 18:40:12 cs100_src EV_AGENT[7847]: Agent is going down
Feb 7 18:40:24 cs100_src DHCPDMON: Starting DHCPD on CS 0
Feb 7 18:40:26 cs100_src setup_enclosure: Executing -dhcpd start
option
Feb 7 18:40:26 cs100_src dhcpd: Internet Software Consortium DHCP
Server V3.0pl1
Feb 7 18:40:26 cs100_src dhcpd: Copyright 1995-2001 Internet Software
Consortium.
Feb 7 18:40:26 cs100_src dhcpd: All rights reserved.
Feb 7 18:40:26 cs100_src dhcpd: For info, please visit
http://www.isc.org/products/DHCP
Feb 7 18:40:26 cs100_src dhcpd: Wrote 0 deleted host decls to leases
file.
Feb 7 18:40:26 cs100_src dhcpd: Wrote 0 new dynamic host decls to
leases file.
Feb 7 18:40:26 cs100_src dhcpd: Wrote 0 leases to leases file.
Feb 7 18:40:26 cs100_src dhcpd: Listening on
LPF/eth2/00:00:f0:9c:1f:d5/128.221.253.0/24
Feb 7 18:40:26 cs100_src dhcpd: Sending on
LPF/eth2/00:00:f0:9c:1f:d5/128.221.253.0/24
Feb 7 18:40:26 cs100_src dhcpd: Listening on
LPF/eth0/00:00:f0:9d:00:e2/128.221.252.0/24
Feb 7 18:40:26 cs100_src dhcpd: Sending on
LPF/eth0/00:00:f0:9d:00:e2/128.221.252.0/24
Feb 7 18:40:26 cs100_src dhcpd: Sending on Socket/fallback/fallback-
net
Feb 7 18:40:40 cs100_src mcd_helper: : Failed to umount /nas (0)
Feb 7 18:40:40 cs100_src EMCServer: nas_mcd: Failed to gracefully
shutdown MCD and halt
servers. Forcing halt and reboot...
Feb 7 18:40:40 cs100_src EMCServer: nas_mcd: Halting all servers...
Activate a failover from the destination VNX
Note: In a true disaster scenario where the source Symmetrix DMX system is unavailable, contact
your local EMC Customer Support Representative to coordinate restoration activities.
1. Log in to the destination VNX (cs110_dst) as nasadmin.
Note: The activation must always be performed from the destination VNX.
2. List the device groups on the destination VNX by typing:
$ /nas/symcli/bin/symdg list
Output:
D E V I C E G R O U P S
Number of
Name Type Valid Symmetrix ID Devs GKs BCVs VDEVs TGTs
1R1_4 RDF1 Yes 000190100557 6 0 0 0 0
1REG REGULAR Yes 000190100557 0 0 62 0 0
1R2_500_3 RDF2 Yes 000190100557 70 0 0 0 0
Note: In the sample cs110_dst configuration, these device groups represent the following: 1R1_4 represents
the R1 volumes of the destination Control Station, 1REG represents the local (non-SRDF/A) volumes, and
1R2_500_3 represents the R2 data volumes. The device groups in the configuration might have different names.
These steps apply to both types of failover scenarios (graceful or disaster recovery). However, the sample output
is based on a graceful failover activation.
3. Query the destination's SRDF/A information for the device group representing the R2 data volumes (on cs110_dst,
1R2_500_3) by typing:
$ /nas/symcli/bin/symrdf -g 1R2_500_3 query -rdfa
Note: The output uses "..." to indicate a continuation of the device entries; not all device entries are listed.
Output:
Device Group (DG) Name : 1R2_500_3
DG's Type : RDF2
DG's Symmetrix ID : 000190100557
RDFA Session Number : 2
RDFA Cycle Number : 41
RDFA Session Status : Active
RDFA Minimum Cycle Time : 00:00:30
RDFA Avg Cycle Time : 00:00:30
Duration of Last cycle : 00:00:30
RDFA Session Priority : 33
Tracks not Committed to the R2 Side : 1276
Time that R2 is behind R1 : 00:00:32
RDFA R1 Side Percent Cache In Use : 0
RDFA R2 Side Percent Cache In Use : 0
Transmit Idle Time : 00:00:00
Target (R2) View Source (R1) View MODES
-------------------------- -------------------- ---- --------
ST LI ST
Standard A N A
Logical T R1 Inv R2 Inv K T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MDAC STATE
-------------------------------- -- -------------- ---- --------
DEV001 034D WD 0 0 RW 034D RW 0 0 A... Consistent
DEV002 034E WD 0 0 RW 034E RW 0 0 A... Consistent
DEV003 034F WD 0 0 RW 034F RW 0 0 A... Consistent
DEV004 0350 WD 0 0 RW 0350 RW 0 0 A... Consistent
DEV005 0351 WD 0 0 RW 0351 RW 0 0 A... Consistent
DEV006 0352 WD 0 0 RW 0352 RW 0 0 A... Consistent
DEV007 035A WD 0 0 RW 035A RW 0 0 A... Consistent
DEV008 035B WD 0 0 RW 035B RW 0 0 A... Consistent
DEV009 035C WD 0 0 RW 035C RW 0 0 A... Consistent
DEV010 035D WD 0 0 RW 035D RW 0 0 A... Consistent
...
DEV067 0396 WD 0 0 RW 0396 RW 0 0 A... Consistent
DEV068 0397 WD 0 0 RW 0397 RW 0 0 A... Consistent
DEV069 0398 WD 0 0 RW 0398 RW 0 0 A... Consistent
DEV070 0399 WD 0 0 RW 0399 RW 0 0 A... Consistent
Total -------- -------- -------- --------
Track(s) 0 0 0 0
MB(s) 0.0 0.0 0.0 0.0
Legend for MODES:
M(ode of Operation): A = Async, S = Sync, E = Semi-sync, C = Adaptive
Copy
D(omino) : X = Enabled, . = Disabled
A(daptive Copy) : D = Disk Mode, W = WP Mode, . = ACp off
C(onsistency State): X = Enabled, . = Disabled, - = N/A
4. Switch (su) to rdfadmin by typing:
$ su - rdfadmin
Output:
Password:
5. Switch (su) to root by typing:
$ su
Output:
Password:
6. Activate the failover by typing:
# /nas/sbin/nas_rdf -activate
CAUTION For sites with redundant Control Stations, the /nas/sbin/nas_rdf -activate command must be
run from the primary Control Station located in slot 0 (CS0). Ensure that CS1 is powered off at both sites
before you run the nas_rdf -activate command, and CS1 remains powered off for the duration of the
activate and subsequent restore operations.
Note: Ensure that all Data Movers at the source are halted before proceeding. At the destination, do not shut
down or restart any Data Movers, that is, SRDF-protected or non-SRDF Data Movers, while the nas_rdf -activate
or nas_rdf -restore command is running. This might interrupt the communication of VNX with the back end and
cause the command to fail.
7. At the prompt, if this is a true disaster scenario, ensure that you have powered off the source VNX:
Is remote site cs100_src completely shut down (power OFF)?
Note: This prompt serves as a reminder to shut down the source VNX in a disaster scenario or for failover
testing. The nas_rdf -activate command will automatically shut down the source VNX if the command detects
that the source is still operational.
8. At the next prompt, confirm the SRDF/A activation by typing:
yes
Example:
Do you wish to continue [yes or no]: yes
Successfully pinged (Remotely) Symmetrix ID: 000190100582
Note: The ping success message appears if the Symmetrix DMX system attached to the source VNX is suc-
cessfully contacted, indicating it is operational and ready on the SRDF/A link. Activating the destination VNX
write-enables the destination volumes and write-disables the source volumes, as long as SRDF/A and the source
Symmetrix system are operational.
An RDF 'Failover' operation execution is in progress for device group
'1R2_500_3'.
Please wait...
Write Disable device(s) on SA at source (R1)..............Done.
Suspend RDF link(s).......................................Started.
Suspend RDF link(s).......................................Done.
Suspend RDF link(s).......................................Started.
Suspend RDF link(s).......................................Done.
Read/Write Enable device(s) on RA at target (R2)..........Done.
Suspend RDF link(s).......................................Started.
Suspend RDF link(s).......................................Done.
The RDF 'Failover' operation successfully executed for device group
'1R2_500_3'.
Note: A file system check is executed on the R2 control volumes of the source's Control Station file systems.
Waiting for nbs clients to die ... done
/net/500 /etc/auto.500 -t 1,ro
/dev/ndj1: recovering journal
/dev/ndj1: clean, 11591/231360 files, 204230/461860 blocks
fsck 1.26 (3-Feb-2002)
Waiting for nbs clients to die ... done
id type acl slot groupID state name
1 1 1000 2 0 server_2
2 4 1000 3 0 server_3
3 1 1000 4 0 server_4
4 4 1000 5 0 server_5
Note: The SRDF/A standby Data Movers now become active.
server_2 :
server_2 : going offline
rdf : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
done
server_3 :
server_3 : going offline
rdf : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
done
server_4 :
Error 4003: server_4 : standby is not configured
server_5 :
Error 4003: server_5 : standby is not configured
62 Using SRDF/A with VNX 7.1
9. Exit root by typing:
# exit
exit
Note: The failover is complete.
Note: In a true disaster scenario where the source Symmetrix DMX system is unavailable, contact
your local EMC Customer Support Representative to coordinate restoration activities.
Verify SRDF/A after failover activation
1. Log in to the destination VNX (cs110_dst) as rdfadmin.
2. List the available Data Movers at the destination by typing:
$ /nas/bin/nas_server -list
id type acl slot groupID state name
1 1 0 2 0 server_2
2 4 0 3 0 server_3
Note:
• This command shows the Data Mover server table with the ID, type, access control level (ACL) value, slot number, group ID, state, and name of the Data Mover.
• You can run VNX CLI commands, informational SYMCLI commands, or both to verify the SRDF/A configuration on the write-enabled destination VNX.
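If you verify this table from a script, standard text tools are enough. The following is a sketch against the sample output above; the six-field row layout (with groupID left blank) is an assumption taken from that sample:

```shell
# Sketch: pull name and state out of a saved "nas_server -list" report.
# The here-document reproduces the sample output above; in practice,
# pipe the live command output instead.
nas_server_list() {
cat <<'EOF'
id      type  acl  slot groupID  state  name
1        1    0     2             0    server_2
2        4    0     3             0    server_3
EOF
}

# Data rows carry 6 fields because groupID is blank:
# id, type, acl, slot, state, name.
nas_server_list | awk 'NR > 1 && NF == 6 { print $6, "state=" $5 }'
```

A state of 0 in this sample indicates the Data Mover is not faulted; adapt the field positions if your report differs.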
3. List and manage the file systems mounted for the Data Movers now active at the destination by typing:
$ /nas/bin/server_mount ALL
server_2 :
root_fs_2 on / uxfs,perm,rw
root_fs_common on /.etc_common uxfs,perm,ro
fs2 on /fs2 uxfs,perm,rw
fs1a on /fs1a uxfs,perm,rw
fst1 on /fst1 uxfs,perm,rw
server_3 :
root_fs_3 on / uxfs,perm,rw,<unmounted>
root_fs_common on /.etc_common uxfs,perm,ro,<unmounted>
Note: The standby Data Movers activated at the destination acquire the IP and MAC addresses, file systems,
and export tables of their counterparts on the source and have read/write access to all the R2 volumes on the
destination.
4. List interface configuration information for Data Mover server_2 by typing:
$ /nas/bin/server_ifconfig server_2 -all
Output:
server_2 :
cge1 protocol=IP device=cge1
inet=192.168.97.147 netmask=255.255.255.0 broadcast=192.168.97.255
UP, ethernet, mtu=1500, vlan=0, macaddr=0:60:16:9:32:71
loop protocol=IP device=loop
inet=127.0.0.1 netmask=255.0.0.0 broadcast=127.255.255.255
UP, loopback, mtu=32768, vlan=0, macaddr=0:0:0:0:0:0 netname=local-
host
el31 protocol=IP device=mge1
inet=128.221.253.2 netmask=255.255.255.0 broadcast=128.221.253.255
UP, ethernet, mtu=1500, vlan=0, macaddr=0:60:16:5:5d:23 netname=lo-
calhost
el30 protocol=IP device=mge0
inet=128.221.252.2 netmask=255.255.255.0 broadcast=128.221.252.255
UP, ethernet, mtu=1500, vlan=0, macaddr=0:60:16:5:5d:24 netname=lo-
calhost
5. Display disk information after the failover activation by typing:
$ nas_disk -list
id inuse sizeMB storageID-devID type name servers
1 y 11619 000190300281-002C STD root_disk 1,2
2 y 11619 000190300281-002D STD root_ldisk 1,2
3 y 2077 000190300281-002E STD d3 1,2
4 y 2077 000190300281-002F STD d4 1,2
5 y 2077 000190300281-0030 STD d5 1,2
6 y 31874 000190300281-0031 STD d6 1,2
7 n 22999 000190300281-0083 STD d7 1,2
8 n 22999 000190300281-0084 STD d8 1,2
9 n 22999 000190300281-0085 STD d9 1,2
10 n 22999 000190300281-0086 STD d10 1,2
11 y 22999 000190300281-0087 STD d11 1,2
12 n 22999 000190300281-0088 STD d12 1,2
13 n 22999 000190300281-0089 STD d13 1,2
14 y 22999 000190300281-008A STD d14 1,2
15 y 22999 000190300281-008B STD d15 1,2
16 y 22999 000190300281-008C STD d16 1,2
17 y 22999 000190300281-008D STD d17 1,2
18 y 22999 000190300281-008E STD d18 1,2
19 n 22999 000190300281-008F STD d19 1,2
20 n 22999 000190300281-0090 STD d20 1,2
21 n 22999 000190300281-0091 STD d21 1,2
22 n 22999 000190300281-0092 STD d22 1,2
23 n 22999 000190300281-0093 STD d23 1,2
24 n 22999 000190300281-0094 STD d24 1,2
25 n 22999 000190300281-0095 STD d25 1,2
26 n 22999 000190300281-0096 STD d26 1,2
27 n 22999 000190300281-00AB BCV rootd27 1,2
28 n 22999 000190300281-00AC BCV rootd28 1,2
29 n 22999 000190300281-00AD BCV rootd29 1,2
30 n 22999 000190300281-00AE BCV rootd30 1,2
31 n 22999 000190300281-00AF BCV rootd31 1,2
32 y 61424 000190300281-00B6 ATA d32 1,2
33 y 61424 000190300281-00B7 ATA d33 1,2
34 y 61424 000190300281-00B8 ATA d34 1,2
35 y 61424 000190300281-00B9 ATA d35 1,2
36 y 61424 000190300281-00BA ATA d36 1,2
37 y 61424 000190300281-00BB ATA d37 1,2
38 y 61424 000190300281-00BC ATA d38 1,2
39 y 61424 000190300281-00BD ATA d39 1,2
40 y 61424 000190300281-00BE ATA d40 1,2
41 y 61424 000190300281-00BF ATA d41 1,2
42 y 61424 000190300281-00C0 ATA d42 1,2
43 y 61424 000190300281-00C1 ATA d43 1,2
44 n 61424 000190300281-00C2 ATA d44 1,2
45 y 61424 000190300281-00C3 ATA d45 1,2
46 y 61424 000190300281-00C4 ATA d46 1,2
47 y 61424 000190300281-00C5 BCVA rootd47 1,2
48 y 61424 000190300281-00C6 BCVA rootd48 1,2
49 y 61424 000190300281-00C7 BCVA rootd49 1,2
50 y 61424 000190300281-00C8 BCVA rootd50 1,2
51 n 61424 000190300281-00C9 BCVA rootd51 1,2
52 n 61424 000190300281-00CA R1ATA d52 1,2
53 n 61424 000190300281-00CB R1ATA d53 1,2
54 y 61424 000190300281-00CC R1ATA d54 1,2
55 y 61424 000190300281-00CD R1ATA d55 1,2
56 y 61424 000190300281-00CE R1ATA d56 1,2
57 n 61424 000190300281-00D4 R1BCA rootd57 1,2
58 n 61424 000190300281-00D5 R1BCA rootd58 1,2
59 n 61424 000190300281-00D6 R1BCA rootd59 1,2
60 n 61424 000190300281-00D7 R1BCA rootd60 1,2
61 n 61424 000190300281-00D8 R1BCA rootd61 1,2
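When the listing is long, a one-line awk tally can summarize it. Below is a sketch using a few sample rows copied from the output above; pipe the live output of nas_disk -list instead in practice:

```shell
# Sketch: tally a "nas_disk -list" report by disk type and in-use flag.
# Only a few sample rows from the listing above are embedded here.
nas_disk_list() {
cat <<'EOF'
id  inuse  sizeMB  storageID-devID    type   name       servers
1   y      11619   000190300281-002C  STD    root_disk  1,2
7   n      22999   000190300281-0083  STD    d7         1,2
32  y      61424   000190300281-00B6  ATA    d32        1,2
52  n      61424   000190300281-00CA  R1ATA  d52        1,2
EOF
}

# Count disks per (type, inuse) pair, e.g. how many STD disks are in use.
nas_disk_list | awk 'NR > 1 { c[$5 " inuse=" $2]++ } END { for (k in c) print k, c[k] }'
```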
6. Display information for the Data Movers and the file systems at the destination after the failover activation, including the available disk space for all Data Movers or file systems and how much of the total capacity of a file system has been used, by typing:
$ /nas/bin/server_df ALL
Output:
server_2 :
Filesystem kbytes used avail capacity Mounted on
fst1 28798424 600 28797824 0% /fst1
fs1a 9224 600 8624 7% /fs1a
fs2 460787024 600 460786424 0% /fs2
root_fs_common 13624 5264 8360 39% /.etc_common
root_fs_2 114592 776 113816 1% /
server_3 :
Error 2: server_3 : No such file or directory
7. Query the key SRDF/A information now existing for the destination device group with the data volumes (on cs110_dst, 1R2_500_3) by using this command syntax:
$ /nas/symcli/bin/symrdf -g <group> query -rdfa
Where:
<group> = name of the device group
Note: Use only Solutions Enabler Symmetrix SYMCLI informational commands on the VNX device group; do
not invoke any Solutions Enabler Symmetrix SYMCLI action commands using the Control Station host component,
laptop service processor, or EMC ControlCenter. The Solutions Enabler Symmetrix SRDF family documentation
on the EMC Online Support website provides more information.
Note: The session state is now Inactive, the mode is A for asynchronous, and the RDF pair state is now Failed Over. Also note that the output uses ... to indicate a continuation of the device entries, because not all device entries are listed.
Example:
$ /nas/symcli/bin/symrdf -g 1R2_500_3 query -rdfa
Output:
Device Group (DG) Name : 1R2_500_3
DG's Type : RDF2
DG's Symmetrix ID : 000190100557
RDFA Session Number                 : 2
RDFA Cycle Number                   : 0
RDFA Session Status                 : Inactive
RDFA Minimum Cycle Time             : 00:00:30
RDFA Avg Cycle Time                 : 00:00:00
Duration of Last cycle              : 00:00:00
RDFA Session Priority               : 33
Tracks not Committed to the R2 Side : 0
Time that R2 is behind R1           : 00:00:00
RDFA R1 Side Percent Cache In Use   : 0
RDFA R2 Side Percent Cache In Use   : 0
Transmit Idle Time                  : 00:00:00
Target (R2) View Source (R1) View MODES
-------------------------- -------------------- ----- --------
ST LI ST
Standard A N A
Logical T R1 Inv R2 Inv K T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MDAC STATE
-------------------------- -- -------------------- ---- --------
DEV001 034D RW 20 0 NR 034D WD 0 0 A... Failed Over
DEV002 034E RW 2048 0 NR 034E WD 0 0 A... Failed Over
DEV003 034F RW 0 0 NR 034F WD 0 0 A... Failed Over
DEV004 0350 RW 0 0 NR 0350 WD 0 0 A... Failed Over
DEV005 0351 RW 1204 0 NR 0351 WD 0 0 A... Failed Over
DEV006 0352 RW 0 0 NR 0352 WD 0 0 A... Failed Over
DEV007 035A RW 0 0 NR 035A WD 0 0 A... Failed Over
DEV008 035B RW 0 0 NR 035B WD 0 0 A... Failed Over
DEV009 035C RW 0 0 NR 035C WD 0 0 A... Failed Over
DEV010 035D RW 0 0 NR 035D WD 0 0 A... Failed Over
...
DEV060 038F RW 0 0 NR 038F WD 0 0 A... Failed Over
DEV061 0390 RW 0 0 NR 0390 WD 0 0 A... Failed Over
DEV062 0391 RW 0 0 NR 0391 WD 0 0 A... Failed Over
DEV063 0392 RW 0 0 NR 0392 WD 0 0 A... Failed Over
DEV064 0393 RW 0 0 NR 0393 WD 0 0 A... Failed Over
DEV065 0394 RW 2 0 NR 0394 WD 0 0 A... Failed Over
DEV066 0395 RW 2 0 NR 0395 WD 0 0 A... Failed Over
DEV067 0396 RW 0 0 NR 0396 WD 0 0 A... Failed Over
DEV068 0397 RW 0 0 NR 0397 WD 0 0 A... Failed Over
DEV069 0398 RW 0 0 NR 0398 WD 0 0 A... Failed Over
DEV070 0399 RW 0 0 NR 0399 WD 0 0 A... Failed Over
Total ------ ----- ------ ----
Track(s) 3280 0 0 0
MB(s) 102.5 0.0 0.0 0.0
Legend for MODES:
M(ode of Operation): A = Async, S = Sync, E = Semi-sync, C = Adaptive
Copy
D(omino) : X = Enabled, . = Disabled
A(daptive Copy) : D = Disk Mode, W = WP Mode, . = ACp off
C(onsistency State): X = Enabled, . = Disabled, - = N/A
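A script can confirm the failover completed by checking that every device line reports the Failed Over pair state. The following sketch embeds two device lines from the query output above; in practice, pipe the live symrdf query output:

```shell
# Sketch: check that every device in a "symrdf ... query -rdfa" report
# is in the "Failed Over" RDF pair state before going further.
symrdf_query() {
cat <<'EOF'
DEV001 034D RW   20   0 NR 034D WD    0   0 A... Failed Over
DEV002 034E RW 2048   0 NR 034E WD    0   0 A... Failed Over
EOF
}

# Count device lines that do NOT carry the expected pair state.
not_failed_over=$(symrdf_query | grep '^DEV' | grep -cv 'Failed Over')
if [ "$not_failed_over" -eq 0 ]; then
    echo "all listed devices are Failed Over"
else
    echo "$not_failed_over device(s) not Failed Over" >&2
fi
```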
Ensure access after failover
If you have not accounted for different IP subnets at the source and destination sites, perform
these steps after a failover to ensure that users have access to the same file systems, using
the same network addresses as they did on the source-site VNX, provided they have network
access to the destination-site VNX.
Note: You can perform these steps either manually or by creating and running a script. If you perform
these steps at the destination site after activation, ensure that you perform them at the source site
after the /nas/sbin/nas_rdf -restore command completes. These steps are required at the source to
return everything to the original configuration. Restore the source VNX on page 69 provides information
about the restore procedures.
1. Halt CIFS.
2. Set IP addresses and default routes.
3. Adjust services such as WINS, DNS, NIS, and NTP.
4. Restart CIFS.
Restore the source VNX
A restore of the source VNX is a planned, scheduled event performed by or under the
guidance of your local EMC Customer Support Representative to ensure continuity between
the Symmetrix DMX systems.
To restore the source VNX:
1. Prepare for the restore on page 70
2. Restore from the destination on page 70
CAUTION: For sites with redundant Control Stations, ensure that all SRDF management commands, including nas_rdf -init, -activate, and -restore, are run from the primary Control Station located in slot 0 (CS0). Always ensure that CS1 is powered off at both sites before you run any -activate or -restore commands. When the CS1 shutdown process completes, type /nas/sbin/getreason and check the output to verify the shutdown. The output should contain the line 0 - slot_1 powered off. Because this is a planned event, ensure that you keep CS1 powered off for the duration of the event.
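The getreason check described above can be automated. The following is a sketch, with a here-document standing in for live /nas/sbin/getreason output:

```shell
# Sketch: verify CS1 is powered off before running -activate or -restore.
# The embedded sample stands in for live "/nas/sbin/getreason" output.
getreason_output() {
cat <<'EOF'
10 - slot_0 primary control station
 0 - slot_1 powered off
EOF
}

if getreason_output | grep -q '0 - slot_1 powered off'; then
    echo "CS1 is powered off; safe to proceed"
else
    echo "CS1 appears to be running; power it off first" >&2
    exit 1
fi
```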
Prepare for the restore
Note: Perform these steps under the guidance of your local EMC Customer Support Representative.
1. Verify proper SRDF/A operations by performing a complete system check of the Symmetrix system and SRDF/A.
CAUTION
• If this is a true disaster scenario, keep the source VNX (cs100_src) powered off until you are instructed to power it up by your local EMC Customer Support Representative. Your local EMC Customer Support Representative ensures proper operation of the Symmetrix system attached to the source VNX.
• Proceed only after confirmation from your EMC Customer Support Representative.
2. Power up the source VNX, ensuring that the source Control Station restarts, and perform a restart by typing:
# reboot -f -n
Note: You can now perform the restore from the destination VNX.
Restore from the destination
The restoration process occurs in two phases:
• The device-group update phase provides network clients continued access to the destination VNX while the destination Symmetrix system updates the source Symmetrix system. The length of the update phase is based on the amount of data changed since the destination system was activated.
• The device-group failback phase is typically short in duration, usually under 20 minutes, and network clients are suspended from file system access while the file systems fully synchronize from the destination to the source Symmetrix system.
1. Log in to the destination VNX (cs110_dst) as rdfadmin and switch (su) to root.
2. Start the restore of the source VNX by typing:
# /nas/sbin/nas_rdf -restore
CAUTION
• For sites with redundant Control Stations, ensure that the nas_rdf -restore command is run from the primary Control Station located in slot 0 (CS0). Always ensure that CS1 is powered off at both sites before you run the -restore command. When the CS1 shutdown process completes, type /nas/sbin/getreason and check the output to verify the shutdown. The output should contain the line 0 - slot_1 powered off. Because this is a planned event, ensure that you keep CS1 powered off for the duration of the event.
• Do not shut down or restart any Data Movers (SRDF-protected or non-SRDF Data Movers) while the nas_rdf -restore command is running. Doing so might interrupt the communication of VNX with the storage system and cause the command to fail.
• Proceed only after your EMC Customer Support Representative has verified that the source Symmetrix DMX system and SRDF/A link are operational.
3. At the prompt, to continue restoration and begin the device-group update phase, type:
yes
Example:
Is remote site cs100_src ready for Storage restoration?
Do you wish to continue [yes or no]: yes
Contact cs100_src ... is alive
4. At the next prompt, to continue the restore process, type:
yes
Example:
Restore will now reboot the source side control station.
Do you wish to continue? [yes or no]: yes
CAUTION: The source Symmetrix system must be operational and all volumes must be ready on the link.
Note: The system verifies that the Symmetrix devices are in proper condition to be restored at the source site. Note that the output uses ... to indicate a continuation of the device entries, because not all device entries are listed.
Device Group (DG) Name : 1R2_500_3
DG's Type : RDF2
DG's Symmetrix ID : 000190100557
Target (R2) View Source (R1) View MODES
----------------------------- ------------------ --- --------
ST LI ST
Standard A N A
Logical T R1 Inv R2 Inv K T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MDA STATE
----------------------------- -- ----------------- --- --------
DEV001 034D RW 20 0 NR 034D WD 0 0 A.. Failed Over
DEV002 034E RW 2048 0 NR 034E WD 0 0 A.. Failed Over
DEV003 034F RW 0 0 NR 034F WD 0 0 A.. Failed Over
DEV004 0350 RW 0 0 NR 0350 WD 0 0 A.. Failed Over
DEV005 0351 RW 1204 0 NR 0351 WD 0 0 A.. Failed Over
DEV006 0352 RW 0 0 NR 0352 WD 0 0 A.. Failed Over
DEV007 035A RW 0 0 NR 035A WD 0 0 A.. Failed Over
DEV008 035B RW 0 0 NR 035B WD 0 0 A.. Failed Over
DEV009 035C RW 0 0 NR 035C WD 0 0 A.. Failed Over
DEV010 035D RW 0 0 NR 035D WD 0 0 A.. Failed Over
DEV011 035E RW 0 0 NR 035E WD 0 0 A.. Failed Over
DEV012 035F RW 0 0 NR 035F WD 0 0 A.. Failed Over
...
DEV065 0394 RW 2 0 NR 0394 WD 0 0 A.. Failed Over
DEV066 0395 RW 2 0 NR 0395 WD 0 0 A.. Failed Over
DEV067 0396 RW 0 0 NR 0396 WD 0 0 A.. Failed Over
DEV068 0397 RW 0 0 NR 0397 WD 0 0 A.. Failed Over
DEV069 0398 RW 0 0 NR 0398 WD 0 0 A.. Failed Over
DEV070 0399 RW 0 0 NR 0399 WD 0 0 A.. Failed Over
Total ------ ------ ------- ------
Track(s) 3280 0 0 0
MB(s) 102.5 0.0 0.0 0.0
Legend for MODES:
M(ode of Operation): A = Async, S = Sync, E = Semi-sync, C = Adaptive
Copy
D(omino) : X = Enabled, . = Disabled
A(daptive Copy) : D = Disk Mode, W = WP Mode, . = ACp off
Note:
• The update of the source Symmetrix system now begins by automatically setting the SRDF device group mode to SYNC (synchronous) for the update and failback.
• The file systems and shares are still available to the network clients during the update phase. Under certain conditions, volumes might not be in the proper state to fail back, in which case the restore command exits. If this occurs, Chapter 4 provides information about the errors that might occur. If necessary, contact your local EMC Customer Support Representative.
+++++ Setting RDF group 1R2_500_3 to SYNC mode.
An RDF 'Update R1' operation execution is in progress for device
group '1R2_500_3'. Please wait...
Suspend RDF link(s).......................................Done.
Merge device track tables between source and target.......Started.
Devices: 036B-037C in (0557,03).......................... Merged.
Devices: 034D-0352, 035A-036A in (0557,03)............... Merged.
Devices: 037D-038E in (0557,03).......................... Merged.
Devices: 038F-0399 in (0557,03).......................... Merged.
Merge device track tables between source and target.......Done.
Resume RDF link(s)........................................Started.
Resume RDF link(s)........................................Done.
The RDF 'Update R1' operation successfully initiated for device group
'1R2_500_3'.
Note: The device-group update phase is now complete. The next step begins the device-group failback phase.
5. At the next prompt, to begin the network restoration phase, type:
yes
Note: Ensure that the source CS0 is operational and on the data network.
Example:
Is remote site cs100_src ready for Network restoration?
Do you wish to continue [yes or no]: yes
Note: The SRDF/A standby Data Movers are halted and the destination file systems and shares become unavailable to network clients while the Symmetrix DMX systems (source and destination) fully synchronize.
server_2 : done
server_3 : done
server_4 :
Error 4003: server_4 : standby is not configured
server_5 :
Error 4003: server_5 : standby is not configured
/dev/ndj1: clean, 11595/231360 files, 204256/461860 blocks
fsck 1.26 (3-Feb-2002)
Waiting for nbs clients to die ... done
Waiting for nbs clients to die ... done
/net/500 /etc/auto.500 -t 0,rw,sync
Waiting for 1R2_500_3 access ...done
Note: The R2 devices on the destination are set to read-only while the Symmetrix systems fully synchronize.
Note: The failback operation now occurs.
An RDF 'Failback' operation execution is in progress for device group
'1R2_500_3'.
Please wait...
Write Disable device(s) on RA at target (R2)..............Done.
Suspend RDF link(s).......................................Done.
Merge device track tables between source and target.......Started.
Devices: 036B-037C in (0557,03).......................... Merged.
Devices: 034D-0352, 035A-036A in (0557,03)............... Merged.
Devices: 038F-0399 in (0557,03).......................... Merged.
Devices: 037D-038E in (0557,03).......................... Merged.
Merge device track tables between source and target.......Done.
Resume RDF link(s)........................................Started.
Resume RDF link(s)........................................Done.
Read/Write Enable device(s) on SA at source (R1)..........Done.
The RDF 'Failback' operation successfully executed for device group
'1R2_500_3'.
Waiting for 1R2_500_3 sync ....done
Note: The device-group failback phase is complete.
6. At the prompt, set the device group to run in SRDF/A mode by typing:
yes
Example:
Starting restore on remote site cs100_src ...
Waiting for nbs clients to start ... done
Waiting for nbs clients to start ... done
Suspend RDF link(s).......................................Done.
server_2 :
server_2 : going standby
rdf : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
done
server_3 :
server_3 : going standby
rdf : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
done
server_4 :
Error 4003: server_4 : standby is not configured
server_5 :
Error 4003: server_5 : standby is not configured
Resume RDF link(s)........................................Started.
Resume RDF link(s)........................................Done.
If the RDF device groups were setup to operate in ASYNCHRONOUS (
SRDF/A ) mode, now would
be a good time to set it back to that mode.
Would you like to set device group 1R2_500_3 to ASYNC Mode ? [yes
or no]: yes
An RDF Set 'Asynchronous Mode' operation execution is in progress
for device group
'1R2_500_3'. Please wait...
The RDF Set 'Asynchronous Mode' operation successfully executed for
device group
'1R2_500_3'.
Starting Services on remote site cs100_src ...done
7. Exit root by typing:
# exit
Output:
exit
8. Exit rdfadmin by typing:
$ exit
Output:
logout
Note: The restoration is complete.
Note: After the restore process completes, you should be able to log in to the source VNX (cs100_src),
and manage it directly from the source nasadmin account. Chapter 4 provides information about what
to do if an error occurs during the restore process, or if you encounter problems with the restored VNX.
If necessary, contact your local EMC Customer Support Representative.
4
Troubleshooting
As part of an effort to continuously improve and enhance the performance
and capabilities of its product lines, EMC periodically releases new versions
of its hardware and software. Therefore, some functions described in this
document may not be supported by all versions of the software or hardware
currently in use. For the most up-to-date information on product features,
refer to your product release notes.
If a product does not function properly or does not function as described
in this document, contact your EMC Customer Support Representative.
Problem Resolution Roadmap for VNX contains additional information
about using the EMC Online Support website and resolving problems.
Topics included are:
• EMC E-Lab Interoperability Navigator on page 78
• Known problems and limitations on page 78
• Error messages on page 98
• EMC Training and Professional Services on page 99
EMC E-Lab Interoperability Navigator
The EMC E-Lab™ Interoperability Navigator is a searchable, web-based application that provides access to EMC interoperability support matrices. It is available on the EMC Online Support website at http://Support.EMC.com. After logging in, locate the applicable Support by Product page, find Tools, and click E-Lab Interoperability Navigator.
Known problems and limitations
This section provides information on how to:
• Retrieve information from log files on page 78
• Resolve initialization failures on page 79
• Resolve activation failures on page 84
• Resolve restore failures on page 87
• Resolve Data Mover failure after failover activation on page 95
• Handle additional error situations on page 98
Retrieve information from log files
Normally, system messages are reported to the system log files. To retrieve information from log files:
• Check the system log (sys_log) by using the server_log command
• Check the command error log (cmd_log.err) for message information
To retrieve substantial SRDF/A logging information:
• Use the /nas/tools/collect_support_materials script, which collects data, such as disaster recovery information, from the following log files:
  - /nas/log/dr_log.al
  - /nas/log/dr_log.al.rll
  - /nas/log/dr_log.al.err
  - /nas/log/dr_log.al.trace*
  - /nas/log/symapi.log*
These log files can also be viewed individually.
• To monitor these logs while the nas_rdf command is running, check the file in the /tmp directory. After the command completes, the logs appear in the /nas/log directory.
To gather more data after a failure, such as a failed restore, access the following sources:
• Disaster recovery (dr*) files: provide state changes, as well as other key informational messages
• The symapi.log file: logs storage-related errors
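For a quick first pass over these logs, something like the following sketch can pull recent error lines; the error|fail pattern is an assumption, and only two of the log paths listed above are scanned:

```shell
# Sketch: pull recent error lines from the disaster-recovery and SYMAPI
# logs listed above. Widen or narrow the search pattern as needed.
for f in /nas/log/dr_log.al.err /nas/log/symapi.log; do
    [ -f "$f" ] || continue          # skip logs that are not present
    echo "==== $f ===="
    tail -n 200 "$f" | grep -iE 'error|fail'
done
```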
Resolve initialization failures
This section provides a sample failed initialization scenario in which two destination Data Movers (server_2 and server_3), intended to serve as SRDF/A standbys for two source production Data Movers, already have a local standby Data Mover (server_5). This is an invalid configuration: Data Movers serving as SRDF/A standbys cannot have a local standby Data Mover.
This section includes:
• Example 1 for initialization failure on page 79
• Resolution for initialization failure example 1 on page 81
• Example 2 for initialization failure on page 82
Example 1 for initialization failure
[root@cs110_dst nasadmin]# /nas/sbin/nas_rdf -init
Discover local storage devices ...
Discovering storage (may take several minutes)
done
Start R2 dos client ...
done
Start R2 nas client ...
done
Contact cs100_src ... is alive
Please create a new login account to manage RDF site cs100_src
New login: rdfadmin
New password:
BAD PASSWORD: it is based on a dictionary word
Retype new password:
Changing password for user rdfadmin
passwd: all authentication tokens updated successfully
done
Please enter the passphrase for RDF site cs100_src:
Passphrase:
rdfadmin
Retype passphrase:
rdfadmin
operation in progress (not interruptible)...
id = 1
name = cs100_src
owner = 500
device = /dev/ndj1
channel = rdev=/dev/ndg, off_MB=391; wdev=/dev/nda, off_MB=391
net_path = 192.168.97.140
celerra_id = 000190100582034D
passphrase = rdfadmin
Discover remote storage devices ...done
The following servers have been detected on the system (cs110_dst):
id type acl slot groupID state name
1 1 1000 2 0 server_2
2 1 1000 3 0 server_3
3 1 1000 4 0 server_4
4 4 1000 5 0 server_5
Please enter the id(s) of the server(s) you wish to reserve
(separated by spaces) or "none" for no servers.
Select server(s) to use as standby:1 2
server_2 : Error 4031: server_2 : server_2 has a standby
server: server_5
server_3 : Error 4031: server_3 : server_3 has a standby
server: server_5
operation in progress (not interruptible)...
id = 1
name = cs100_src
owner = 500
device = /dev/ndj1
channel = rdev=/dev/ndg, off_MB=391; wdev=/dev/nda, off_MB=391
net_path = 192.168.97.140
celerra_id = 000190100582034D
passphrase = rdfadmin
Please create a rdf standby for each server listed
server server_2 in slot 2, remote standby in slot [2] (or none):none
server server_3 in slot 3, remote standby in slot [3] (or none):none
server server_4 in slot 4, remote standby in slot [4] (or none):none
server server_5 in slot 5, remote standby in slot [5] (or none):none
Resolution for initialization failure example 1
1. List and verify the servers by typing:
# nas_server -list
Output:
id type acl slot groupID state name
1 1 1000 2 0 server_2
2 1 1000 3 0 server_3
3 1 1000 4 0 server_4
4 4 1000 5 0 server_5
2. Delete the local standby relationship by typing:
# server_standby server_2 -delete mover=server_5
Output:
server_2 : done
3. Delete.....by typing:
[root@cs110_dst nasadmin]# /nas/sbin/getreason
Output:
10 - slot_0 primary control station
5 - slot_2 contacted
5 - slot_3 contacted
5 - slot_4 contacted
5 - slot_5 contacted
4. Delete.....by typing:
[root@cs110_dst nasadmin]# /nas/sbin/getreason
Output:
10 - slot_0 primary control station
5 - slot_2 contacted
5 - slot_3 contacted
5 - slot_4 contacted
5 - slot_5 contacted
5. Delete the.....by typing:
[root@cs110_dst nasadmin]# /nas/bin/nas_server -list
Output:
id type acl slot groupID state name
1 1 1000 2 0 server_2
2 1 1000 3 0 server_3
3 1 1000 4 0 server_4
4 4 1000 5 0 server_5
6. Delete.....by typing:
[root@cs110_dst nasadmin]# /nas/bin/nas_server -info -all
Output:
id = 1
name = server_2
acl = 1000, owner=nasadmin, ID=201
type = nas
slot = 2
member_of =
standby =
status :
defined = enabled
actual = online, ready
id = 2
name = server_3
acl = 1000, owner=nasadmin, ID=201
type = nas
slot = 3
member_of =
standby =
status :
defined = enabled
actual = online, ready
id = 3
name = server_4
acl = 1000, owner=nasadmin, ID=201
type = nas
slot = 4
member_of =
standby = server_5, policy=auto
status :
defined = enabled
actual = online, ready
id = 4
name = server_5
acl = 1000, owner=nasadmin, ID=201
type = standby
slot = 5
member_of =
standbyfor= server_4
status :
defined = enabled
actual = online, ready
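The local standby relationship that blocks SRDF/A initialization can also be spotted from a saved nas_server -info report. The following is a sketch against a trimmed sample of the output above:

```shell
# Sketch: report Data Movers that already have a local standby, using a
# saved "nas_server -info -all" report. Such Data Movers cannot also be
# assigned an SRDF standby. A trimmed sample of the output is embedded.
nas_server_info() {
cat <<'EOF'
name      = server_4
standby   = server_5, policy=auto
name      = server_5
standbyfor= server_4
EOF
}

nas_server_info | awk '
    $1 == "name"              { mover = $3 }
    $1 == "standby" && NF > 2 { sub(/,$/, "", $3)
                                print mover, "has local standby", $3 }'
# prints: server_4 has local standby server_5
```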
Example 2 for initialization failure
The following example highlights the error occurred when a file system is mounted on a
local Data Mover, intended to serve as an RDF standby. The example shows new prompts
for the user to change the configuration on another window before proceeding with the
initialization.
[root@cs110_dst nasadmin]# /nas/sbin/nas_rdf -init
Discover local storage devices ...
Discovering storage (may take several minutes)
done
Please create a rdf standby for each server listed
server server_2 in slot 2, remote standby in slot [2] (or none): 2
Error 3122: server_2 : filesystem is unreachable: rl64k
Server server_2 has local file system mounted.
Please unmount those file system in another window and try again.
Do you wish to continue? [yes or no]:
********************
$ nas_server -i server_2
id = 1
name = server_2
acl = 1000, owner=nasadmin, ID=201
type = nas
slot = 2
member_of =
standby = server_3, policy=auto
RDFstandby= slot=2
status :
defined= enabled
actual = online, active
$ server_mount server_2 rl64k /rl64k
server_2 : done
Warning 17716815751: server_2 :has a standby server: rdf, filesystem:
rl64k is local, will not be able to failover
[nasadmin@cs100_src ~]$ server_df server_2
server_2 :
Filesystem kbytes used avail capacity Mounted on
rl64k 230393504 124697440 105696064 54% /rl64
mc1 230393504 45333944 185059560 20% /mc1
mc2 460787024 327825496 132961528 71% /mc2
mc1a_ckpt1 230393504 410408 229983096 0% /mc1a_ckpt1
mc1a 230393504 410424 229983080 0% /mc1a
rl32krdf 230393504 696 230392808 0% /caddata/wdc1/32k
root_fs_common 13624 5288 8336 9% /.etc_common
root_fs_2 231944 6152 225792 3% /
Resolve activation failures
This section provides a sample failed activation scenario in which a local file system is mounted on an SRDF-protected standby Data Mover. The error conditions are illustrated and the corrective commands are listed after the error.
This section includes:
• Example for activation failure on page 84
• Resolution for activation failure on page 85
Example for activation failure
[root@cs110_dst rdfadmin]# /nas/sbin/nas_rdf -activate
Is remote site cs100_src completely shut down (power OFF)?
Do you wish to continue? [yes or no]: yes
Successfully pinged (Remotely) Symmetrix ID: 000190100582
An RDF 'Failover' operation execution is in progress for device group
'1R2_500_3'.
Please wait...
Write Disable device(s) on SA at source (R1)..............Done.
Suspend RDF link(s).......................................Done.
Read/Write Enable device(s) on RA at target (R2)..........Done.
The RDF 'Failover' operation successfully executed for
device group '1R2_500_3'.
Waiting for nbs clients to die ... done
/net/500 /etc/auto.500 -t 1,ro
/dev/ndj1: recovering journal
/dev/ndj1: clean, 11587/231360 files, 204164/461860 blocks
fsck 1.26 (3-Feb-2002)
Waiting for nbs clients to die ... done
id type acl slot groupID state name
1 1 1000 2 0 server_2
2 4 1000 3 0 server_3
3 1 1000 4 0 server_4
4 4 1000 5 0 server_5
server_2 :
server_2 : going offline
rdf : going active
replace in progress ...failed
failover activity complete
replace_storage:
replace_volume: volume is unreachable
d141,d142,d143,d144,d145,d146,d147,d148
server_3 :
server_3 : going offline
rdf : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
done
server_4 :
Error 4003: server_4 : standby is not configured
server_5 :
Error 4003: server_5 : standby is not configured
An RDF 'Update R1' operation execution is in progress for device
'DEV001' in group '1R2_500_3'. Please wait...
Suspend RDF link(s).......................................Done.
Merge device track tables between source and target.......Started.
Device: 034D in (0557,03)................................ Merged.
Merge device track tables between source and target.......Done.
Resume RDF link(s)........................................Started.
Resume RDF link(s)........................................Done.
The RDF 'Update R1' operation successfully initiated for device
'DEV001' in group '1R2_500_3'.
Resolution for activation failure
1. List and verify the servers by typing:
# nas_server -list
Output:
id type acl slot groupID state name
1 1 1000 2 2 server_2.faulted.rdf
2 4 0 3 0 server_3
3 1 1000 4 0 server_4
4 4 1000 5 0 server_5
2. Unmount all non-SRDF (local) file systems from the Data Mover that failed to activate (in this case, server_2) by typing:
[root@cs110_dst rdfadmin]# server_umount server_2.faulted.rdf -perm fs5
Output:
server_2.faulted.rdf : done
3. Manually activate SRDF for the Data Mover that originally failed by typing:
[root@cs110_dst rdfadmin]# server_standby server_2.faulted.rdf -activate rdf
Output:
server_2.faulted.rdf :
server_2.faulted.rdf : going standby
rdf : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
done
4. Verify the configuration by typing:
[root@cs110_dst rdfadmin]# /nas/sbin/getreason
Output:
10 - slot_0 primary control station
5 - slot_2 contacted
5 - slot_3 contacted
5 - slot_4 contacted
5 - slot_5 contacted
5. Verify the server list by typing:
[root@cs110_dst rdfadmin]# /nas/bin/nas_server -list
Output:
id type acl slot groupID state name
1 1 0 2 0 server_2
2 4 0 3 0 server_3
3 1 1000 4 0 server_4
4 4 1000 5 0 server_5
6. Verify the mounted file systems by typing:
[root@cs110_dst rdfadmin]# /nas/bin/server_mount ALL
Output:
server_2 :
root_fs_2 on / uxfs,perm,rw
root_fs_common on /.etc_common uxfs,perm,ro
fs2 on /fs2 uxfs,perm,rw
fs1a on /fs1a uxfs,perm,rw
fst1 on /fst1 uxfs,perm,rw
server_3 :
root_fs_3 on / uxfs,perm,rw,<unmounted>
root_fs_common on /.etc_common uxfs,perm,ro,<unmounted>
server_4 :
root_fs_4 on / uxfs,perm,rw
root_fs_common on /.etc_common uxfs,perm,ro
server_5 :
root_fs_5 on / uxfs,perm,rw,<unmounted>
root_fs_common on /.etc_common uxfs,perm,ro,<unmounted>
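The telltale sign of a failed activation in the `nas_server -list` output above is a Data Mover name ending in `.faulted.rdf`. The steps above can be sketched as a quick check. This is a hypothetical helper, not an EMC tool; it operates on the sample listing from this section, and on a live system you would pipe `/nas/bin/nas_server -list` into the same filter instead.

```shell
# Hypothetical helper (not an EMC tool): scan nas_server -list output for
# Data Movers left in the faulted state after a failed activation.
# The here-string below is the sample listing from this section; on a
# live system, pipe /nas/bin/nas_server -list into the same awk filter.
list_output='id type acl slot groupID state name
1 1 1000 2 2 server_2.faulted.rdf
2 4 0 3 0 server_3
3 1 1000 4 0 server_4
4 4 1000 5 0 server_5'

# The server name is the last field; match the .faulted.rdf suffix.
faulted=$(printf '%s\n' "$list_output" | awk '$NF ~ /\.faulted\.rdf$/ {print $NF}')
printf '%s\n' "$faulted"
```

Any name this prints is a candidate for the `server_umount` and `server_standby -activate` steps above.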
Resolve restore failures
This section provides sample restore failure scenarios.
Note: The source-side Control Station 0 must be operational for the restore, and VNX for file services
on the source site start only after the restore process successfully completes. A "Waiting for Disaster
Recovery to complete" message appears in /var/log/messages, and the source remains in that state
until the restore completes. This change, which involves the use of a DR lock, ensures correct sequential
operation, thereby ensuring that the source-site services come up correctly under RDF control and
that no user commands run until the source site is completely restored. Error messages on page 98
provides more information about the associated errors.
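The waiting state described in the note can be spotted with a simple grep. This is a hedged sketch: the sample log line (timestamp, host, and message prefix) is invented for illustration, and on the source Control Station you would grep /var/log/messages itself.

```shell
# Hedged sketch: detect the "Waiting for Disaster Recovery to complete"
# state described in the note above. The sample line is invented for
# illustration; on the source Control Station, grep /var/log/messages.
sample_log='Jul 10 12:00:01 cs100_src nas: Waiting for Disaster Recovery to complete'
count=$(printf '%s\n' "$sample_log" | grep -c 'Waiting for Disaster Recovery')
echo "$count"
```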
Topics included are:
◆ Example 1 for restoration failure on page 87
◆ Resolution for restoration failure example 1 on page 91
◆ Example 2 for restoration failure (NS series gateway) on page 92
◆ Resolution for restoration failure example 2 on page 92
◆ Example 3 for restoration failure (database lock error) on page 93
◆ Resolution for restoration failure example 3 on page 95
Example 1 for restoration failure
This example shows an active/passive restore operation started by root from rdfadmin on
cs110_dst using the nas_rdf -restore command. The output shows the events leading up to
the error message on the destination VNX, followed by output from the source after taking
corrective action.
Example 1 restore failure
[root@cs110_dst rdfadmin]# /nas/sbin/nas_rdf -restore
Is remote site cs100_src ready for Storage restoration?
Do you wish to continue? [yes or no]: yes
Contact cs100_src ...
Unable to contact node cs100_src at 192.168.96.58.
Do you wish to continue? [yes or no]: yes
Device Group (DG) Name : 1R2_500_11
DG's Type : RDF2
DG's Symmetrix ID : 000187940255
Target (R2) View Source (R1) View MODES
-------------------------------- ------------------ ---- --------
ST LI ST
Standard A N A
Logical T R1 Inv R2 Inv K T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MDA STATE
-------------------------------- -- -------- ------- ---- --------
DEV001 0000 RW 12 0 NR 0000 WD 0 0 A.. Failed Over
DEV002 0001 RW 4096 0 NR 0001 WD 0 0 A.. Failed Over
DEV003 000D RW 1 0 NR 000D WD 0 0 A.. Failed Over
DEV004 000E RW 1 0 NR 000E WD 0 0 A.. Failed Over
DEV005 000F RW 0 0 NR 000F WD 0 0 A.. Failed Over
DEV006 0010 RW 0 0 NR 0010 WD 0 0 A.. Failed Over
DEV007 0011 RW 0 0 NR 0011 WD 0 0 A.. Failed Over
DEV008 0012 RW 0 0 NR 0012 WD 0 0 A.. Failed Over
DEV009 0013 RW 0 0 NR 0013 WD 0 0 A.. Failed Over
DEV010 0014 RW 0 0 NR 0014 WD 0 0 A.. Failed Over
DEV011 0015 RW 1 0 NR 0015 WD 0 0 A.. Failed Over
DEV012 0016 RW 1 0 NR 0016 WD 0 0 A.. Failed Over
...
DEV161 02D3 RW 0 0 NR 0253 WD 0 0 A.. Failed Over
DEV162 02D7 RW 0 0 NR 0257 WD 0 0 A.. Failed Over
DEV163 0003 RW 0 0 NR 0003 WD 0 0 A.. Failed Over
DEV164 0004 RW 851 0 NR 0004 WD 0 0 A.. Failed Over
DEV165 0005 RW 0 0 NR 0005 WD 0 0 A.. Failed Over
Total -------- --------- ------ ------
Track(s) 4965 0 0 0
MB(s) 155.2 0.0 0.0 0.0
Legend for MODES:
M(ode of Operation): A = Async, S = Sync, E = Semi-sync, C = Adaptive
Copy
D(omino) : X = Enabled, . = Disabled
A(daptive Copy) : D = Disk Mode, W = WP Mode, . = ACp off
+++++ Setting RDF group 1R2_500_11 to SYNC mode.
An RDF 'Update R1' operation execution is in progress for device group
'1R2_500_11'.
Please wait...
Suspend RDF link(s).......................................Done.
Merge device track tables between source and target.......Started.
Devices: 0000-0001 ...................................... Merged.
Devices: 0003-0005 ...................................... Merged.
Devices: 000D-0015 ...................................... Merged.
Devices: 0016-001E ...................................... Merged.
Devices: 001F-0027 ...................................... Merged.
Devices: 0028-0030 ...................................... Merged.
Devices: 0031-0039 ...................................... Merged.
Devices: 003A-0042 ...................................... Merged.
Devices: 0043-004B ...................................... Merged.
Devices: 004C-0054 ...................................... Merged.
Devices: 0055-005D ...................................... Merged.
Devices: 005E-0066 ...................................... Merged.
Devices: 0067-006F ...................................... Merged.
Devices: 0070-0078 ...................................... Merged.
Devices: 0079-0081 ...................................... Merged.
Devices: 0082-008A ...................................... Merged.
Devices: 008B-008C ...................................... Merged.
Devices: 01DB-01E1 ...................................... Merged.
Devices: 01E2-01E8 ...................................... Merged.
Devices: 01E9-01EF ...................................... Merged.
Devices: 01F0-01F6 ...................................... Merged.
Devices: 01F7-01FD ...................................... Merged.
Devices: 01FE-0204 ...................................... Merged.
Devices: 0205-020B ...................................... Merged.
Devices: 020C-0212 ...................................... Merged.
Devices: 0213-0219 ...................................... Merged.
Devices: 021A-0220 ...................................... Merged.
Devices: 0221-0227 ...................................... Merged.
Devices: 0228-022E ...................................... Merged.
Devices: 022F-0235 ...................................... Merged.
Devices: 0236-023C ...................................... Merged.
Devices: 023D-0243 ...................................... Merged.
Devices: 0244-024A ...................................... Merged.
Devices: 024B-0251 ...................................... Merged.
Devices: 0252-0258 ...................................... Merged.
Devices: 0259-025A ...................................... Merged.
Merge device track tables between source and target.......Done.
Resume RDF link(s)........................................Done.
The RDF 'Update R1' operation successfully initiated for device group
'1R2_500_11'.
Is remote site cs100_src ready for Network restoration?
Do you wish to continue? [yes or no]: yes
server_2 : done
server_3 : done
server_4 :
Error 4003: server_4 : standby is not configured
server_5 :
Error 4003: server_5 : standby is not configured
/dev/sdj1: clean, 11464/231360 files, 164742/461860 blocks
fsck 1.26 (3-Feb-2002)
/net/500 /etc/auto.500 -t 0,rw,sync
Waiting for 1R2_500_11 access ...done
An RDF 'Failback' operation execution is in progress for device group
'1R2_500_11'.
Please wait...
Write Disable device(s) on RA at target (R2)..............Done.
Suspend RDF link(s).......................................Done.
Merge device track tables between source and target.......Started.
Devices: 0000-0001 ...................................... Merged.
Devices: 0003-0005 ...................................... Merged.
Devices: 000D-0015 ...................................... Merged.
Devices: 0016-001E ...................................... Merged.
Devices: 001F-0027 ...................................... Merged.
Devices: 0028-0030 ...................................... Merged.
Devices: 0031-0039 ...................................... Merged.
Devices: 003A-0042 ...................................... Merged.
Devices: 0043-004B ...................................... Merged.
Devices: 004C-0054 ...................................... Merged.
Devices: 0055-005D ...................................... Merged.
Devices: 005E-0066 ...................................... Merged.
Devices: 0067-006F ...................................... Merged.
Devices: 0070-0078 ...................................... Merged.
Devices: 0079-0081 ...................................... Merged.
Devices: 0082-008A ...................................... Merged.
Devices: 008B-008C ...................................... Merged.
Devices: 01DB-01E1 ...................................... Merged.
Devices: 01E2-01E8 ...................................... Merged.
Devices: 01E9-01EF ...................................... Merged.
Devices: 01F0-01F6 ...................................... Merged.
Devices: 01F7-01FD ...................................... Merged.
Devices: 01FE-0204 ...................................... Merged.
Devices: 0205-020B ...................................... Merged.
Devices: 020C-0212 ...................................... Merged.
Devices: 0213-0219 ...................................... Merged.
Devices: 021A-0220 ...................................... Merged.
Devices: 0221-0227 ...................................... Merged.
Devices: 0228-022E ...................................... Merged.
Devices: 022F-0235 ...................................... Merged.
Devices: 0236-023C ...................................... Merged.
Devices: 023D-0243 ...................................... Merged.
Devices: 0244-024A ...................................... Merged.
Devices: 024B-0251 ...................................... Merged.
Devices: 0252-0258 ...................................... Merged.
Devices: 0259-025A ...................................... Merged.
Merge device track tables between source and target.......Done.
Resume RDF link(s)........................................Done.
Read/Write Enable device(s) on SA at source (R1)..........Done.
The RDF 'Failback' operation successfully executed for device group
'1R2_500_11'.
Waiting for 1R2_500_11 sync ...done
Starting restore on remote site cs100_src ...
failed
----------------------------------------------------------------
Please execute /nasmcd/sbin/nas_rdf -restore on remote site cs100_src
----------------------------------------------------------------
[root@cs100_src rdfadmin]#
Note: The failure occurs after the failback operation is executed for the device group, when the restore
is set to begin on the source VNX.
Resolution for restoration failure example 1
1. Run the restore on the source site as root by typing:
[root@cs100_src nasadmin]# /nasmcd/sbin/nas_rdf -restore
server_2 : rdf : reboot in progress ............
server_2 : going standby
rdf : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
done
server_3 : rdf : reboot in progress ............
server_3 : going standby
rdf : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
done
server_4 :
Error 4003: server_4 : standby is not configured
server_5 :
Error 4003: server_5 : standby is not configured
If the RDF device groups were setup to operate in ASYNCHRONOUS (
SRDF/A ) mode, now would
be a good time to set it back to that mode.
Would you like to set device group 1R1_11 to ASYNC Mode ? [yes or
no]: yes
An RDF Set 'Asynchronous Mode' operation execution is in progress
for device group
'1R1_11'. Please wait...
The RDF Set 'Asynchronous Mode' operation successfully executed
for device group '1R1_11'.
If the RDF device groups were setup to operate in ASYNCHRONOUS (
SRDF/A ) mode, now would
be a good time to set it back to that mode.
Would you like to set device group 1R1_12 to ASYNC Mode ? [yes or
no]: no
Starting Services ...done
[root@cs100_src nasadmin]#
2. Exit as root. This concludes the device-group failback phase.
3. Log in to and manage the source VNX directly from the nasadmin account on the source VNX (cs100_src).
If the restoration is still unsuccessful, gather SRDF/A logging information using the script /nas/tools/collect_support_materials.
Example 2 for restoration failure (NS series gateway)
The following example on an NS600G or NS700G shows a failed restore operation at the
destination.
Example 2 restore failure
[root@Celerra2 nasadmin]# /nas/sbin/nas_rdf -restore
...
Starting restore on remote site Celerra1 ...
Waiting for nbs clients to start ... WARNING: Timed out
Waiting for nbs clients to start ... done
CRITICAL FAULT:
Unable to mount /nas/dos
Starting Services on remote site Celerra1 ...done
Note: "..." indicates that not all lines of the restore output are shown.
Resolution for restoration failure example 2
1. Stop the services at the source as root by typing:
[root@Celerra1 nasadmin]# /sbin/service nas stop
2. Perform a restore at the source as root by typing:
[root@Celerra1 nasadmin]# /nasmcd/sbin/nas_rdf -restore
Output:
...
Waiting for nbs clients to start ... done
Waiting for nbs clients to start ... done
Suspend RDF link(s).......................................Done.
server_2 :
replace in progress ...done
commit in progress (not interruptible)...done
done
server_3 :
Error 4003: server_3 : standby is not configured
Resume RDF
link(s)........................................Done.
Starting Services ...done
Example 3 for restoration failure (database lock error)
The following example shows a restore error that occurs when a server fails to acquire the
database lock. The restore completes with the error, but resolving the error involves running
the server_standby command at the source for the server involved in the lock contention.
The error appears in the output below.
Example 3 restore error
[root@cs0_dst rdfadmin]# /nas/sbin/nas_rdf -restore
Is remote site cs100_src ready for Storage restoration?
Do you wish to continue? [yes or no]: yes
Contact cs0_src ... is alive
Target (R2) View Source (R1) View MODES
-------------------------------- -------------------- ---- --------
ST LI ST
Standard A N A
Logical T R1 Inv R2 Inv K T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MDA STATE
-------------------------------- -- ----------------- --- --------
DEV001 08F2 RW 1 0 RW 37CD WD 0 0 C.D R1 Updated
DEV002 08F3 RW 58 0 RW 37CE WD 0 0 C.D R1 Updated
DEV003 08FA RW 0 0 RW 37D3 WD 0 0 C.D R1 Updated
DEV004 08FB RW 0 0 RW 37D4 WD 0 0 C.D R1 Updated
DEV005 08FC RW 12 0 RW 37D5 WD 0 0 C.D R1 Updated
DEV006 08FD RW 0 0 RW 37D6 WD 0 0 C.D R1 Updated
DEV007 092C RW 3546 0 RW 0629 WD 0 0 C.D R1 Updated
DEV008 0930 RW 2562 0 RW 062D WD 0 0 C.D R1 Updated
DEV009 06F5 RW 0 0 RW 0631 WD 0 0 C.D R1 Updated
Total -------- ----- ------ -----
Track(s) 6179 0 0 0
MB(s) 193.1 0.0 0.0 0.0
Legend for MODES:
M(ode of Operation): A = Async, S = Sync, E = Semi-sync, C = Adaptive
Copy
D(omino) : X = Enabled, . = Disabled
A(daptive Copy) : D = Disk Mode, W = WP Mode, . = ACp off
An RDF 'Update R1' operation execution is in progress for device group
'1R2_500_4'.
Please wait...
Suspend RDF link(s).......................................Done.
Merge device track tables between source and target.......Started.
Devices: 37CD-37CE ...................................... Merged.
Devices: 37D3-37D6 ...................................... Merged.
Devices: 0629-0634 ...................................... Merged.
Merge device track tables between source and target.......Done.
Resume RDF link(s)........................................Started.
Resume RDF link(s)........................................Done.
The RDF 'Update R1' operation successfully initiated for device group
'1R2_500_4'.
Is remote site cs0_src ready for Network restoration?
Do you wish to continue? [yes or no]: yes
server_2 : done
server_3 : done
server_4 :
Error 4003: server_4 : standby is not configured
server_5 :
Error 4003: server_5 : standby is not configured
/dev/ndj1: clean, 10308/231360 files, 175874/461860 blocks
fsck 1.26 (3-Feb-2002)
Waiting for nbs clients to die ... done
Waiting for nbs clients to die ... done
/net/500 /etc/auto.500 -t 0,rw,sync
Waiting for 1R2_500_4 access ...done
An RDF 'Failback' operation execution is in progress for device group
'1R2_500_4'.
Please wait...
Write Disable device(s) on RA at target (R2)..............Done.
Suspend RDF link(s).......................................Done.
Merge device track tables between source and target.......Started.
Devices: 37CD-37CE ...................................... Merged.
Devices: 37D3-37D6 ...................................... Merged.
Devices: 0629-0634 ...................................... Merged.
Merge device track tables between source and target.......Done.
Resume RDF link(s)........................................Started.
Resume RDF link(s)........................................Done.
Read/Write Enable device(s) on SA at source (R1)..........Done.
The RDF 'Failback' operation successfully executed for device group
'1R2_500_4'.
Waiting for 1R2_500_4 sync ...done
Starting restore on remote site cs0_src ...
Waiting for nbs clients to start ... done
Waiting for nbs clients to start ... done
server_2 :
Error 2201: server_2 : unable to acquire lock(s), try later
server_3 :
server_3 : going standby
rdf : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
done
server_4 :
Error 4003: server_4 : standby is not configured
server_5 :
Error 4003: server_5 : standby is not configured
If the RDF device groups were setup to operate in ASYNCHRONOUS ( SRDF/A )
mode,
now would be a good time to set it back to that mode.
Would you like to set device group 1R2_500_4 to ASYNC Mode ? [yes or no]:
yes
Starting Services on remote site cs0_src ...
[root@cs0_dst rdfadmin]# exit
[rdfadmin@cs0_dst rdfadmin]$ exit
[nasadmin@cs0_dst nasadmin]$ exit
Resolution for restoration failure example 3
Run the server_standby command on the source VNX for the server that had the lock contention (in this example, server_2).
Example:
[nasadmin@cs0_src nasadmin]$ server_standby server_2 -restore rdf
Output:
server_2 :
server_2 : going standby
rdf : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
done
server_2 : Nil
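Error 2201 indicates transient lock contention ("try later"), so the corrective command above may need to be repeated. The loop below is a minimal sketch of that retry pattern; `restore_server` is our own stub standing in for `server_standby server_2 -restore rdf`, so the sketch runs anywhere.

```shell
# Minimal retry sketch for Error 2201 ("unable to acquire lock(s), try
# later"). restore_server is a stub standing in for
# `server_standby server_2 -restore rdf`; replace it on a real system.
restore_server() { echo "server_2 : done"; }

attempt=1
out=""
while [ "$attempt" -le 3 ]; do
  out=$(restore_server)
  case "$out" in
    # Lock still held: wait and retry a limited number of times.
    *'unable to acquire lock'*) attempt=$((attempt + 1)); sleep 2 ;;
    # Anything else: the command either succeeded or failed for a
    # different reason, so stop retrying.
    *) break ;;
  esac
done
printf '%s\n' "$out"
```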
Resolve Data Mover failure after failover activation
If a Data Mover develops hardware issues after activation, you can replace the affected Data
Mover and update the hardware information. To update the hardware information, you must
run the setup_slot command first as nasadmin switching (su) to root, and then as rdfadmin
switching (su) to root.
1. Log in to the destination VNX (cs110_dst) as nasadmin and switch (su) to root.
2. Initialize the Data Mover by using this command syntax:
[root@cs110_dst nasadmin]# /nas/sbin/setup_slot -init <x>
where:
<x> = Slot number of the new Data Mover
Example:
To initialize the Data Mover for slot 2, type:
[root@cs110_dst nasadmin]# /nas/sbin/setup_slot -init 2
Initializing server in slot 2 as server_2 ...
Starting PXE service...:done
Reboot server in slot 2, waiting..... 0 0 0 0 0 0 1 1 1 3 3 3 3 3
3 3 4 (154 secs)
Stopping PXE service...:done
Ping server in slot 2 on primary interface ...ok
Ping server in slot 2 on backup interface ...ok
Discover disks attached to server in slot 2 ...
Discovering storage (may take several minutes)
server_2 : done
server_2 : done
server_2 : done
server_2 : done
Synchronize date+time on server in slot 2 ...
server_2 : Mon Aug 17 12:11:05 EDT 2009
server_2 :
Processor = Intel Pentium 4
Processor speed (MHz) = 2800
Total main memory (MB) = 4093
Mother board = CMB-Sledgehammer
Bus speed (MHz) = 800
Bios Version = 03.80
Post Version = Rev. 01.59
server_2 : reboot in progress 0.0.0.0.0.0.0.0.1.1.3.3.3.3.3.4.done
Checking to make sure slot 2 is ready........ 5 5 (63 secs)
Completed setup of server in slot 2 as server_2
This Data Mover (also referred to as Blade) is a MirrorView or RDF
standby Data Mover, log in to the system as rdfadmin and switch
(su) to root, and, regardless of straight (for example, server 2
to server 2) or criss-cross (for example, server 2 to server 3)
configuration, use the CLI command with the same slot id:
/nas/sbin/setup_slot -i 2
[root@cs110_dst nasadmin]#
3. Exit root by typing:
[root@cs110_dst nasadmin]# exit
exit
4. Exit nasadmin by typing:
[nasadmin@cs110_dst ~]$ exit
logout
5. Log in to the destination VNX (cs110_dst) as rdfadmin and switch (su) to root.
6. Initialize the Data Mover by using this command syntax:
[root@cs110_dst rdfadmin]# /nas/sbin/setup_slot -init <x>
where:
<x> = Slot number of the new Data Mover
Example:
To initialize the Data Mover for slot 2, type:
[root@cs110_dst rdfadmin]#
/nas/sbin/setup_slot -init 2
The script will update only hardware related configuration such
as the MAC addresses for the internal network and then reboot the
Data Mover (also referred to as Blade).
server_2 : reboot in progress 0.0.0.0.0.0.0.0.1.1.3.3.3.3.3.4.done
Checking to make sure slot 2 is ready........ 5 5 (64 secs)
Completed setup of server in slot 2 as server_2
[root@cs110_dst rdfadmin]#
CAUTION: Ensure that you run the setup_slot command first as nasadmin switching (su) to root and
then as rdfadmin switching (su) to root.
If you run the command as rdfadmin before running it as nasadmin, you will get the following error message:
setup_slot has not been run as nasadmin before running it as rdfadmin
user on this Data Mover (also referred to as Blade) The script
will exit without changing the state of the system or rebooting
it. Please do the following to set up this Data Mover correctly:
1. Initialize the new Data Mover for the nasadmin database by
logging in to the system as nasadmin, switching (su) to root,and
using the CLI command:
/nas/sbin/setup_slot -init 2
2. Initialize the new Data Mover for the rdfadmin database by
logging in to the system as rdfadmin user, switching (su) to root,
and using the CLI command:
/nas/sbin/setup_slot -init 2
7. Exit root by typing:
[root@cs110_dst rdfadmin]# exit
exit
8. Exit rdfadmin by typing:
[rdfadmin@cs110_dst ~]$ exit
logout
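The nasadmin-before-rdfadmin ordering that the CAUTION enforces can be sketched as a simple guard. `setup_slot_as` and its state file are our own stand-ins for illustration, not the real script's internals; the real setup_slot tracks this state itself and reboots the Data Mover.

```shell
# Hedged sketch of the ordering rule above: setup_slot must run for the
# nasadmin database before the rdfadmin one. setup_slot_as and the state
# file are our own stand-ins, not the real script's internals.
state_file=$(mktemp)

setup_slot_as() {
  user="$1"; slot="$2"
  # Refuse the rdfadmin pass if the nasadmin pass has not run for this slot.
  if [ "$user" = "rdfadmin" ] && ! grep -q "nasadmin:$slot" "$state_file"; then
    echo "error: run setup_slot as nasadmin first"
    return 1
  fi
  echo "$user:$slot" >> "$state_file"
  echo "Completed setup of server in slot $slot as server_$slot"
}

first=$(setup_slot_as nasadmin 2)
second=$(setup_slot_as rdfadmin 2)
printf '%s\n%s\n' "$first" "$second"
```

Running the rdfadmin pass for a slot that never saw the nasadmin pass produces the error path, mirroring the message shown above.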
Handle additional error situations
◆ If you shut down or restart any Data Movers (SRDF-protected or non-SRDF Data Movers)
at the destination while the /nas/sbin/nas_rdf -activate or the /nas/sbin/nas_rdf -restore
command is running, the Control Station does not find a path to the storage system. With
the communication between VNX and the backend interrupted, the command fails.
Respond by doing the following:
1. Rerun the /nas/sbin/nas_rdf -activate or the /nas/sbin/nas_rdf -restore command after
the Data Mover is operational.
2. Do not shut down or restart any Data Movers at the destination while these commands
are running.
◆ When you run the -init command, it changes the ACL for the local Data Movers to 1000.
This prevents DR administrators from inadvertently accessing the local Data Movers in
the failed over state. However, this also prevents Global Users in Unisphere from
accessing this Data Mover in the normal state.
To resolve this problem, the ACL for the local Data Movers is no longer changed when
you run the -init command. Instead, the ACL is changed to 1111 during failover when
you run the -activate command. This prevents DR administrators from accessing the
Data Movers after failover and allows Global Users to access them in the normal state.
During failback, when you run the -restore command, the ACL is changed to 0. If you
initially use ACL 1111 for the local Data Movers on the source side, ensure that you
change the ACL from 0 back to 1111 after failback. Alternately, you can change the ACL
for the local Data Movers on the source side to some other value, for example, 1000, to
avoid this manual change.
◆ For some applications, IP address configuration must be considered carefully; otherwise,
the data could become corrupt. Consideration when using applications that
can switch to the NFS copy from the R2 without a restart on page 33 provides more
information.
◆ When using applications such as Oracle, ensure that you use the correct client-side NFS
mount options; otherwise, incorrect data could appear on SRDF or TimeFinder
copies. Consideration when using applications that require transactional consistency on
page 32 provides more information.
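The ACL transitions described above (1000 historically on -init, 1111 during -activate, 0 during -restore) can be summarized in a small lookup. The descriptive labels are our own shorthand for this section's explanations, not EMC-defined state names.

```shell
# Hedged summary of the ACL values this section describes for local Data
# Movers. The descriptive labels are our own, not EMC-defined names.
acl_state() {
  case "$1" in
    1111) echo "failed over: DR administrators blocked from local Data Movers" ;;
    1000) echo "normal: local Data Movers protected, Global Users blocked" ;;
    0)    echo "after failback: reset the ACL manually if you started from 1111" ;;
    *)    echo "unknown ACL value" ;;
  esac
}

state=$(acl_state 1111)
echo "$state"
```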
Error messages
All event, alert, and status messages provide detailed information and recommended actions
to help you troubleshoot the situation.
To view message details, use any of these methods:
◆ Unisphere software:
Right-click an event, alert, or status message and select to view Event Details, Alert
Details, or Status Details.
◆ CLI:
Type nas_message -info <MessageID>, where <MessageID> is the message
identification number.
◆ Celerra Error Messages Guide:
Use this guide to locate information about messages that are in the earlier-release
message format.
◆ EMC Online Support website:
Use the text from the error message's brief description or the message's ID to search
the Knowledgebase on the EMC Online Support website. After logging in to EMC
Online Support, locate the applicable Support by Product page, and search for the
error message.
EMC Training and Professional Services
EMC Customer Education courses help you learn how EMC storage products work together
within your environment to maximize your entire infrastructure investment. EMC Customer
Education features online and hands-on training in state-of-the-art labs conveniently located
throughout the world. EMC customer training courses are developed and delivered by EMC
experts. Go to the EMC Online Support website at http://Support.EMC.com for course and
registration information.
EMC Professional Services can help you implement your system efficiently. Consultants
evaluate your business, IT processes, and technology, and recommend ways that you can
leverage your information for the most benefit. From business plan to implementation, you
get the experience and expertise that you need without straining your IT staff or hiring and
training new personnel. Contact your EMC Customer Support Representative for more
information.
Appendix A
Portfolio of High-Availability
Options
This appendix illustrates the VNX SRDF high-availability configuration
options. Figure 4 on page 102 shows a configuration featuring active/passive
SRDF in either asynchronous mode (SRDF/A) or synchronous mode
(SRDF/S).
Figure 4 on page 102, Figure 5 on page 102, Figure 6 on page 102, Figure
7 on page 103, Figure 8 on page 103, Figure 9 on page 104, and Figure 10
on page 104 show disaster recovery and business continuance configurations
featuring SRDF/S only, SRDF links, or both, with TimeFinder/FS NearCopy,
TimeFinder/FS FarCopy, or both, with adaptive copy mode (adaptive copy
disk or write-pending mode).
Note: Prior to the introduction of SRDF/A, adaptive copy mode was referred to as
asynchronous mode SRDF.
For more information on the configuration that best fits your business
needs, contact your local EMC sales organization.
Figure 4. VNX replication and recovery with active/passive synchronous SRDF (SRDF/S) or asynchronous SRDF (SRDF/A)
Figure 5. VNX disaster recovery active/passive SRDF/S only with TimeFinder/FS NearCopy
Figure 6. VNX disaster recovery active/active SRDF/S only
Figure 7. VNX business continuance with TimeFinder/FS FarCopy (version 5.1)
[Diagram] Figure 8. VNX business continuance with TimeFinder/FS FarCopy
(version 5.3). As in Figure 7, a BCV snapshot of the PFS on R1 BCV volumes at
the production site is copied over dedicated SRDF links in adaptive copy
write-pending mode to R2 BCV volumes at the business continuance recovery site,
where the FarCopy snapshot is imported onto local STD volumes. In addition, the
Control Stations at the two sites communicate over an IP WAN.
[Diagram] Figure 9. VNX business continuance with redundant FarCopy sites. A
single production site (PFS on local STD volumes, BCV snapshots of the PFS on
R1 BCV volumes) copies to two business continuance recovery sites over separate
dedicated SRDF links in adaptive copy write-pending mode. Each recovery site
holds R2 BCV volumes and imports a FarCopy snapshot of the PFS onto local STD
volumes. The Control Stations of all three sites communicate over an IP WAN.
[Diagram] Figure 10. VNX business continuance using TimeFinder/FS FarCopy with
many sites. Two production sites, each with a PFS on local STD volumes and a
BCV snapshot of the PFS on R1 BCV volumes, copy over SRDF links in adaptive
copy write-pending mode to a single business continuance recovery site. The
recovery site holds R2 BCV volumes for each production site and imports the
FarCopy snapshots onto local STD volumes. The Control Stations of all three
sites communicate over an IP WAN.
Glossary
A
active/active
In EMC Symmetrix Remote Data Facility (SRDF) or EMC MirrorView/Synchronous
configurations, a bidirectional configuration with two production sites, each acting as the
standby for the other. Each VNX for file has both production and standby Data Movers. If one
site fails, the other site takes over and serves the clients of both sites. In SRDF, each Symmetrix
system contains source (production) and remote destination volumes. In MirrorView/S,
each VNX for block is configured to have source and destination LUNs and a consistency group.
active/passive
In SRDF or MirrorView/S configurations, a unidirectional setup where one VNX for file, with
its attached system, serves as the source (production) file server and another VNX for file, with
its attached storage, serves as the destination (backup). This configuration provides failover
capability in the event that the source site is unavailable. An SRDF configuration requires
Symmetrix systems as back-end storage. A MirrorView/S configuration requires
VNX for block server systems as back-end storage.
adaptive copy disk-pending mode
SRDF mode of operation in which write tasks accumulate in global memory on the local system
before being sent to the remote system. This mode allows the primary and secondary volumes
to be more than one I/O out of synchronization. The maximum number of I/Os that can be out
of synchronization is defined using a maximum skew value.
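The skew limit can be illustrated with a small sketch. This is a toy model only: the class and method names are invented for illustration, and the real adaptive copy modes enforce the skew limit in Symmetrix microcode, not in host code.

```python
from collections import deque

class AdaptiveCopySession:
    """Toy model of adaptive copy mode: writes accumulate locally and drain
    to the remote side later, but source and target may drift apart by at
    most max_skew outstanding I/Os (the maximum skew value)."""

    def __init__(self, max_skew):
        self.max_skew = max_skew
        self.pending = deque()   # writes accumulated, not yet on the remote
        self.remote = []         # writes applied at the remote system

    def write(self, data):
        # If accepting another write would exceed the allowed skew,
        # drain one pending I/O first (falling back toward sync behavior).
        if len(self.pending) >= self.max_skew:
            self.drain(1)
        self.pending.append(data)

    def drain(self, n):
        # Send up to n accumulated writes across the link, oldest first.
        for _ in range(min(n, len(self.pending))):
            self.remote.append(self.pending.popleft())

    def skew(self):
        return len(self.pending)

session = AdaptiveCopySession(max_skew=3)
for block in ["w1", "w2", "w3", "w4", "w5"]:
    session.write(block)

print(session.skew())    # never exceeds the configured maximum skew
print(session.remote)    # oldest writes were drained first, in order
```

The point of the sketch is only the invariant: the number of unsent writes never exceeds the configured skew, so the secondary can lag but only by a bounded amount.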
C
Common Internet File System (CIFS)
File-sharing protocol based on the Microsoft Server Message Block (SMB) protocol. It allows
users to share file systems over the Internet and intranets.
D
delta set
In SRDF/Asynchronous (SRDF/A), a predetermined cycle of operation used to asynchronously
transfer host writes from a source to a destination. Each delta set contains groups of I/Os for
processing. The ordering of these I/Os is managed for consistency. In VNX Replicator, a set
contains the block modifications made to the source file system that VNX Replicator uses to
update the destination file system (a read-only, point-in-time, consistent replica of the source
file system). The minimum delta-set size is 128 MB.
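The delta-set transfer can be sketched as a small pipeline. This is a minimal toy model of the capture, transmit, receive, and restore cycles referenced in this document; the class and method names are illustrative, and the real mechanics run inside the Symmetrix system, not in host software.

```python
class DeltaSetPipeline:
    """Toy model of SRDF/A delta-set cycling. Host writes collect in a
    capture cycle; on each cycle switch the capture set moves to transmit,
    crosses the link into receive, and is finally applied (restored) at the
    destination as one consistent unit."""

    def __init__(self):
        self.cycle_number = 0
        self.capture = []        # delta set currently collecting host writes
        self.transmit = None     # delta set in flight to the destination
        self.receive = None      # delta set arrived, awaiting restore
        self.destination = {}    # consistent destination image

    def host_write(self, lba, data):
        self.capture.append((lba, data))

    def switch_cycle(self):
        # Apply a fully received delta set as one unit: the destination only
        # ever reflects whole, ordered delta sets (dependent write
        # consistency), never a partially applied one.
        if self.receive is not None:
            for lba, data in self.receive:
                self.destination[lba] = data
        self.receive = self.transmit
        self.transmit = self.capture
        self.capture = []
        self.cycle_number += 1

p = DeltaSetPipeline()
p.host_write(0, "a1")
p.switch_cycle()           # first delta set now in transmit
p.host_write(0, "a2")      # later write lands in the next delta set
p.switch_cycle()           # first set received, second in transmit
p.switch_cycle()           # first set restored at the destination
print(p.destination)       # destination lags the source by whole cycles
```

Notice that after three cycle switches the destination still shows the older write: the replica lags the source, but always by whole, ordered delta sets.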
dependent write consistency
In SRDF/A, the maintenance of a consistent point-in-time replica of data between a source and
destination through the ordering and sequencing of all writes to the destination in ordered,
numbered delta sets.
destination VNX for file
Term for the remote (secondary) VNX for file in an SRDF or MirrorView/S configuration. The
destination VNX for file is typically the standby side of a disaster recovery configuration.
Symmetrix configurations often refer to the destination VNX for file as the target VNX for file.
L
local mirror
Symmetrix hardware volume that is a complete replica of a production volume within the same
storage unit. If the production volume becomes unavailable, I/O continues to use the local
mirror, transparently to the host.
See also R1 volume.
M
metavolume
On a VNX for file, a concatenation of volumes, which can consist of disk, slice, or stripe volumes.
Also called a hypervolume or hyper. Every file system must be created on top of a unique
metavolume.
See also disk volume, slice volume, stripe volume, and volume.
MirrorView Synchronous (MirrorView/S)
Software application that synchronously maintains copies of production images (source LUNs)
at a separate location to provide disaster recovery capability. The copied images are continuously
updated to be consistent with the source, and provide the ability for a standby VNX site to
take over for a failed VNX site in the event of a disaster at the production site. Synchronous
remote mirrors (source and destination LUNs) remain in synchronization with each other for
every I/O. MirrorView/S requires VNX for block back-end storage.
Multi-Path File System (MPFS)
VNX for file feature that allows heterogeneous servers with MPFS software to concurrently
access, directly over Fibre Channel or iSCSI channels, shared data stored on an EMC Symmetrix
or VNX for block storage system. MPFS adds a lightweight protocol called File Mapping Protocol
(FMP) that controls metadata operations.
P
Production File System (PFS)
Production File System on VNX for file. A PFS is built on Symmetrix volumes or VNX LUNs
and mounted on a Data Mover in the VNX for file.
R
R1 volume
SRDF term denoting the source (primary) Symmetrix volume.
See also local mirror.
R2 volume
SRDF term denoting the destination (secondary) Symmetrix volume.
See also remote mirror.
remote mirror
In SRDF, a remote mirror is a Symmetrix hardware volume physically located in a remote
Symmetrix system. Using the EMC SRDF technology, the remote system is joined to a local
system with a local mirror. If the local mirror becomes unavailable, the remote mirror is
accessible. In MirrorView/S, a remote mirror is a LUN mirrored in a different VNX for block.
Each remote mirror contains an actual source LUN (primary image) and its equivalent
destination LUN (secondary image). If the source site system fails, the destination LUN in the
mirror can be promoted to take over, thus allowing access to data at a remote location.
See also R2 volume.
S
SnapSure
On the VNX for file, a feature that provides read-only, point-in-time copies, also known as
checkpoints, of a file system.
SRDF/Asynchronous (SRDF/A)
SRDF extended-distance replication facility providing a restartable, point-in-time remote replica
that lags not far behind the source. Using SRDF/A with VNX for file provides dependent write
consistency for host writes from a source VNX for file/Symmetrix DMX system pair to a
destination VNX for file/Symmetrix DMX system pair through predetermined timed cycles
(delta sets).
SRDF/Synchronous (SRDF/S)
SRDF complete disaster recovery configuration option that provides synchronized, real-time
mirroring of file system data between the source Symmetrix system and one or more remote
Symmetrix systems at a limited distance (up to 200 km). SRDF/S can include EMC
TimeFinder/FS NearCopy in the configuration (active/passive, active/active).
Symmetrix Remote Data Facility (SRDF)
EMC technology that allows two or more Symmetrix systems to maintain a remote mirror of
data in more than one location. The systems can be located within the same facility, on a campus,
or hundreds of miles apart using fibre or dedicated high-speed circuits. The SRDF family of
replication software offers various levels of high-availability configurations, such as
SRDF/Synchronous (SRDF/S) and SRDF/Asynchronous (SRDF/A).
T
TimeFinder/FS
Business continuance configuration allowing customers to use Symmetrix business continuance
volumes (BCVs) to provide a local or remote point-in-time copy of a VNX file system.
Index
A
activating SRDF/A failover 37, 59
active/passive
configuration 19, 45
failover
activating 59
initializing 22, 45
restoring 69
SRDF/A use of 22
B
business continuance configurations 26
C
capture cycle 23
cautions 52, 57, 69, 71
CLI 13
CNS-14 configuration 31
command
nas_rdf -init 47
SYMCLI 13
commands
fs_copy restriction 26
nas_rdf 40, 42
comparison of related VNX for file features 26
configuring
active/passive 22, 45
Data Movers 30
Control Station
communication 19
preinitialization 42
control volumes 20
CS0 requirement 13
cycle time 23
cycles of SRDF/A 23
D
Data Movers
checklist for 31
configuring 30
destination configuration 52
mirroring 30
source configuration 48
delta set
description 23
dependent write consistency
description 23
destination Celerra Network Server 59
destination VNX
initializing 50
running restore operation on 70
device groups
querying 59, 67
differences between NS series gateway and CNS-14 31
differences between SRDF/A and SRDF/S 25
E
editing the /etc/hosts file 53
EMC E-Lab Navigator 78
error messages 79, 84, 91, 92, 93, 98
database lock error 93
local file system on SRDF Data Mover 84
local standby for SRDF standbys 79
restore on NS series gateway 92
starting restore...failed 91
F
failover 12, 37, 38, 56, 59
initiating active/passive 37, 56, 59
initiating SRDF/A 37
restoring VNX after a 38
FileMover 14
fs_copy command, no support for 26
G
graceful failover 57
H
halting Data Movers 34, 57
health check 34, 39
high-availability and replication products
Celerra Replicator (V2) 28
MirrorView/Synchronous 26
restrictions 26
SnapSure 28
SRDF/Asynchronous 27
SRDF/Synchronous 26
storage platform 26
TimeFinder/FS 27
TimeFinder/FS NearCopy and FarCopy 27
HTTP communication 51
I
initializing
active/passive 22, 45
active/passive failover 37, 56
SRDF relationship 36, 40, 42
IP subnets 69
L
license 12
limits
nas_cel passphrase 42
logical volumes 20
login account 51
M
mapping Symmetrix volumes 21
messages, error 98
mirroring Data Movers 30
MPFS 14
N
nas_rdf -activate 37
nas_rdf -init 36, 40, 42
nas_rdf -restore 38
NS series gateway configuration
Data Movers 31
error messages 92
P
passphrase 42, 51
for nas_cel preinitialization 42
physical volumes 20
postactivation script 69
preinitializing MirrorView/S relationship 42
R
receive cycle 23
Replicator 14
restore cycle 23
restoring
active/passive 69
steps on destination 71
VNX post-failover 38
S
SnapSure 32
SnapSure, restriction 13
source VNX
initializing 47
restoring 70
SRDF
configuration types 19
SRDF/A
comparison to SRDF/S 25
overview 22
starting restore...failed message 91
Symmetrix DMX systems 12
Symmetrix system configuration 21
Symmetrix volume IDs 21
synchronous mode (SRDF/S) 25
T
transmit cycle 23
troubleshooting 77
U
Unisphere software 13
upgrade options 34
user interface 14
V
verifying
operation after activation 63
VNX FileMover 14
volumes
configuring 30