
Front cover


Implementing the IBM System Storage SAN Volume Controller V6.3


Install, use, and troubleshoot the SAN Volume Controller
Become familiar with the exciting new GUI
Learn how to use the Easy Tier function

Jon Tate
Alejandro Berardinelli
Christian Schroeder
Mark Chitti
Massimo Rosati
Torben Jensen

ibm.com/redbooks


International Technical Support Organization

IBM System Storage SAN Volume Controller V6.3

October 2011

SG24-7933-01


Note: Before using this information and the product it supports, read the information in Notices on page xxi.

Second Edition (October 2011)

This edition applies to Version 6.3 of the IBM System Storage SAN Volume Controller. This document was created or updated on January 17, 2012.

Note: This book is based on a pre-GA version of a product and may not apply when the product becomes generally available. We recommend that you consult the product documentation or follow-on versions of this book for more current information.

© Copyright International Business Machines Corporation 2011. All rights reserved.

Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.


Contents
Notices . . . xxi
Trademarks . . . xxii

Summary of changes . . . xxiii
October 2011, Second Edition . . . xxiii

Preface . . . xxv
The team who wrote this book . . . xxv
Now you can become a published author, too! . . . xxviii
Comments welcome . . . xxviii
Stay connected to IBM Redbooks . . . xxviii

Chapter 1. Introduction to storage virtualization . . . 1
1.1 Storage virtualization terminology . . . 2
1.2 User requirements driving storage virtualization . . . 5
1.2.1 Benefits of using the SVC . . . 5
1.3 What is new in SVC V6.3.0 . . . 6
1.4 Summary . . . 6

Chapter 2. IBM System Storage SAN Volume Controller . . . 9
2.1 Brief history of the SAN Volume Controller . . . 10
2.2 SVC architectural overview . . . 10
2.2.1 SAN Volume Controller topology . . . 13
2.3 SVC terminology . . . 14
2.4 SAN Volume Controller components . . . 15
2.4.1 Nodes . . . 15
2.4.2 I/O Groups . . . 16
2.4.3 System . . . 16
2.4.4 Split cluster . . . 17
2.4.5 MDisks . . . 17
2.4.6 Quorum disk . . . 18
2.4.7 Disk tier . . . 18
2.4.8 Storage pool . . . 18
2.4.9 Volumes . . . 20
2.4.10 Easy Tier performance function . . . 21
2.4.11 Hosts . . . 22
2.4.12 Maximum supported configurations . . . 22
2.5 Volume overview . . . 23
2.5.1 Image mode volumes . . . 23
2.5.2 Managed mode volumes . . . 24
2.5.3 Cache mode and cache-disabled volumes . . . 25
2.5.4 Mirrored volumes . . . 26
2.5.5 Thin-provisioned volumes . . . 27
2.5.6 Volume I/O governing . . . 30
2.6 iSCSI overview . . . 30
2.6.1 Use of IP addresses and Ethernet ports . . . 32
2.6.2 iSCSI volume discovery . . . 33
2.6.3 iSCSI authentication . . . 34
2.6.4 iSCSI multipathing . . . 34
2.7 Advanced Copy Services overview . . . 35
2.7.1 Synchronous/Asynchronous remote copy . . . 36
2.7.2 FlashCopy . . . 37
2.7.3 Image Mode Migration and Volume Mirroring Migration . . . 38
2.8 SVC clustered system overview . . . 38
2.8.1 Quorum disks . . . 39
2.8.2 Split I/O groups or split cluster . . . 41
2.8.3 Cache . . . 41
2.8.4 Clustered system management . . . 42
2.8.5 IBM System Storage Productivity Center . . . 43
2.9 User authentication . . . 44
2.9.1 Remote authentication via LDAP . . . 45
2.9.2 SVC user names . . . 53
2.9.3 SVC superuser . . . 53
2.9.4 SVC Service Assistant Tool . . . 53
2.9.5 SVC roles and user groups . . . 53
2.9.6 SVC local authentication . . . 54
2.9.7 SVC remote authentication and single sign-on . . . 55
2.10 SVC hardware overview . . . 57
2.10.1 Fibre Channel interfaces . . . 59
2.10.2 LAN interfaces . . . 59
2.11 Solid-state drives . . . 60
2.11.1 Storage bottleneck problem . . . 60
2.11.2 Solid-state drive solution . . . 61
2.11.3 Solid-state drive market . . . 61
2.11.4 Solid-state drives and SVC . . . 62
2.12 Easy Tier . . . 62
2.12.1 Evaluation mode . . . 63
2.12.2 Automatic data placement mode . . . 63
2.13 What is new with SVC 6.3 . . . 63
2.13.1 SVC 6.3 supported hardware list, device driver, and firmware levels . . . 63
2.13.2 SVC 6.3.0 new features . . . 64
2.14 Useful SVC web links . . . 65

Chapter 3. Planning and configuration . . . 67
3.1 General planning rules . . . 68
3.2 Physical planning . . . 69
3.2.1 Preparing your uninterruptible power supply unit environment . . . 70
3.2.2 Physical rules . . . 70
3.2.3 Cable connections . . . 73
3.3 Logical planning . . . 73
3.3.1 Management IP addressing plan . . . 74
3.3.2 SAN zoning and SAN connections . . . 75
3.3.3 iSCSI IP addressing plan . . . 81
3.3.4 Back-end storage subsystem configuration . . . 84
3.3.5 SVC clustered system configuration . . . 86
3.3.6 Split-cluster system configuration . . . 87
3.3.7 Storage Pool configuration . . . 89
3.3.8 Virtual disk configuration . . . 91
3.3.9 Host mapping (LUN masking) . . . 93
3.3.10 Advanced Copy Services . . . 94
3.3.11 SAN boot support . . . 100
3.3.12 Data migration from a non-virtualized storage subsystem . . . 101
3.3.13 SVC configuration backup procedure . . . 101
3.4 Performance considerations . . . 102
3.4.1 SAN . . . 102
3.4.2 Disk subsystems . . . 102
3.4.3 SVC . . . 103
3.4.4 Performance monitoring . . . 104

Chapter 4. SAN Volume Controller initial configuration . . . 105
4.1 Managing the cluster . . . 106
4.1.1 TCP/IP requirements for SAN Volume Controller . . . 106
4.2 System Storage Productivity Center overview . . . 108
4.2.1 IBM System Storage Productivity Center hardware . . . 110
4.2.2 SVC installation planning information for System Storage Productivity Center . . . 110
4.3 Setting up the SVC cluster . . . 111
4.3.1 Introducing the service panels . . . 111
4.3.2 Prerequisites . . . 115
4.3.3 Initiating cluster creation from the front panel . . . 115
4.4 Configuring the GUI . . . 118
4.4.1 Completing the Create Cluster Wizard . . . 118
4.4.2 Changing the default superuser password . . . 128
4.4.3 Configuring the Service IP Addresses . . . 131
4.4.4 Postrequisites . . . 132
4.5 Secure Shell overview . . . 133
4.5.1 Generating public and private SSH key pairs using PuTTY . . . 134
4.5.2 Uploading the SSH public key to the SVC cluster . . . 136
4.5.3 Configuring the PuTTY session for the CLI . . . 137
4.5.4 Starting the PuTTY CLI session . . . 141
4.5.5 Configuring SSH for AIX clients . . . 143
4.6 Using IPv6 . . . 143
4.6.1 Migrating a cluster from IPv4 to IPv6 . . . 144
4.6.2 Migrating a cluster from IPv6 to IPv4 . . . 147

Chapter 5. Host configuration . . . 149
5.1 Host attachment overview for IBM System Storage SAN Volume Controller . . . 150
5.2 SVC setup . . . 150
5.2.1 Fibre Channel and SAN setup overview . . . 151
5.2.2 Port mask . . . 155
5.3 iSCSI . . . 156
5.3.1 Initiators and targets . . . 156
5.3.2 iSCSI Nodes . . . 157
5.3.3 iSCSI Qualified Name (IQN) . . . 157
5.3.4 iSCSI Setup for SVC and host server . . . 158
5.3.5 Volume discovery . . . 159
5.3.6 Authentication . . . 159
5.3.7 Target failover . . . 160
5.3.8 Host failover . . . 161
5.3.9 Additional sources of information . . . 162
5.4 AIX-specific information . . . 162
5.4.1 Configuring the AIX host . . . 162
5.4.2 Operating system versions and maintenance levels . . . 163
5.4.3 HBAs for IBM System p hosts . . . 163
5.4.4 Configuring fast fail and dynamic tracking . . . 163
5.4.5 Installing the 2145 host attachment support package . . . 165
5.4.6 Subsystem Device Driver Path Control Module . . . 165
5.4.7 Configuring assigned volume using SDDPCM . . . 166
5.4.8 Using SDDPCM . . . 169
5.4.9 Creating and preparing volumes for use with AIX V6.1 and SDDPCM . . . 170
5.4.10 Expanding an AIX volume . . . 170
5.4.11 Running SVC commands from an AIX host system . . . 171
5.5 Windows-specific information . . . 171
5.5.1 Configuring Windows Server 2003, 2008, 2008 R2 hosts . . . 172
5.5.2 Configuring Windows . . . 172
5.5.3 Hardware lists, device driver, HBAs, and firmware levels . . . 172
5.5.4 Host adapter installation and configuration . . . 173
5.5.5 Changing the disk timeout on Microsoft Windows Server . . . 173
5.5.6 Installing the SDDDSM multipath-driver on Windows . . . 173
5.5.7 Attaching SVC volumes to Windows Server 2008 R2 . . . 176
5.5.8 Extending a Windows Server 2008 (R2) volume . . . 182
5.5.9 Removing a disk on Windows . . . 187
5.6 Using the SVC CLI from a Windows host . . . 190
5.7 Microsoft Volume Shadow Copy . . . 191
5.7.1 Installation overview . . . 191
5.7.2 System requirements for the IBM System Storage hardware provider . . . 192
5.7.3 Installing the IBM System Storage hardware provider . . . 192
5.7.4 Verifying the installation . . . 195
5.7.5 Creating the free and reserved pools of volumes . . . 196
5.7.6 Changing the configuration parameters . . . 197
5.8 Specific Linux (on x86 / x86_64) information . . . 199
5.8.1 Configuring the Linux host . . . 199
5.8.2 Configuration information . . . 200
5.8.3 Disabling automatic Linux system updates . . . 200
5.8.4 Setting queue depth with QLogic HBAs . . . 200
5.8.5 Multipathing in Linux . . . 201
5.8.6 Creating and preparing the SDD volumes for use . . . 205
5.8.7 Using the operating system Device Mapper Multipath (DM-MPIO) . . . 207
5.8.8 Creating and preparing DM-MPIO volumes for use . . . 207
5.9 VMware configuration information . . . 211
5.9.1 Configuring VMware hosts . . . 211
5.9.2 Operating system versions and maintenance levels . . . 212
5.9.3 HBAs for hosts running VMware . . . 212
5.9.4 VMware storage and zoning guidance . . . 212
5.9.5 Setting the HBA timeout for failover in VMware . . . 213
5.9.6 Multipathing in ESX . . . 214
5.9.7 Attaching VMware to volumes . . . 214
5.9.8 Volume naming in VMware . . . 217
5.9.9 Setting the Microsoft guest operating system timeout . . . 218
5.9.10 Extending a VMFS volume . . . 218
5.9.11 Removing a datastore from an ESX host . . . 220
5.10 Sun Solaris support information . . . 221
5.10.1 Operating system versions and maintenance levels . . . 221
5.10.2 SDD dynamic pathing . . . 221
5.11 Hewlett-Packard UNIX configuration information . . . 222
5.11.1 Operating system versions and maintenance levels . . . 222
5.11.2 Multipath solutions supported . . . 222
5.11.3 Coexistence of SDD and PV Links . . . 222
5.11.4 Using an SVC volume as a cluster lock disk . . . 223
5.11.5 Support for HP-UX with greater than eight LUNs . . . 223
5.12 Using SDDDSM, SDDPCM, and SDD web interface . . . 223
5.13 Calculating the queue depth . . . 224
5.14 Further sources of information . . . 225
5.14.1 Publications containing SVC storage subsystem attachment guidelines . . . 225

Chapter 6. Data migration . . . 227
6.1 Migration overview . . . 228
6.2 Migration operations . . . 228
6.2.1 Migrating multiple extents (within a storage pool) . . . 228
6.2.2 Migrating extents off an MDisk that is being deleted . . . 229
6.2.3 Migrating a volume between storage pools . . . 229
6.2.4 Migrating the volume to image mode . . . 230
6.2.5 Migrating a volume between I/O Groups . . . 231
6.2.6 Monitoring the migration progress . . . 232
6.3 Functional overview of migration . . . 232
6.3.1 Parallelism . . . 232
6.3.2 Error handling . . . 233
6.3.3 Migration algorithm . . . 233
6.4 Migrating data from an image mode volume . . . 235
6.4.1 Image mode volume migration concept . . . 235
6.4.2 Migration tips . . . 237
6.5 Data migration for Windows using the SVC GUI . . . 237
6.5.1 Windows Server 2008 host system connected directly to the LSI 3500 . . . 238
6.5.2 Adding the SVC between the host system and the LSI 3500 . . . 241
6.5.3 Importing the migrated disks into an online Windows Server 2008 host . . . 257
6.5.4 Adding the SVC between the host and LSI3500 using the CLI . . . 260
6.5.5 Migrating a volume from managed mode to image mode . . . 263
6.5.6 Migrating the volume from image mode to image mode . . . 268
6.5.7 Removing image mode data from the SVC . . . 278
6.5.8 Map the free disks onto the Windows Server 2008 . . . 281
6.6 Migrating Linux SAN disks to SVC disks . . . 283
6.6.1 Connecting the SVC to your SAN fabric . . . 285
6.6.2 Preparing your SVC to virtualize disks . . . 286
6.6.3 Moving the LUNs to the SVC . . . 290
6.6.4 Migrating the image mode volumes to managed MDisks . . . 293
6.6.5 Preparing to migrate from the SVC . . . 296
6.6.6 Migrating the volumes to image mode volumes . . . 299
6.6.7 Removing the LUNs from the SVC . . . 300
6.7 Migrating ESX SAN disks to SVC disks . . . 303
6.7.1 Connecting the SVC to your SAN fabric . . . 304
6.7.2 Preparing your SVC to virtualize disks . . . 306
6.7.3 Moving the LUNs to the SVC . . . 309
6.7.4 Migrating the image mode volumes . . . 312
6.7.5 Preparing to migrate from the SVC . . . 315
6.7.6 Migrating the managed volumes to image mode volumes . . . 317
6.7.7 Removing the LUNs from the SVC . . . 318
6.8 Migrating AIX SAN disks to SVC volumes . . . 321
6.8.1 Connecting the SVC to your SAN fabric . . . 323
6.8.2 Preparing your SVC to virtualize disks . . . 324
6.8.3 Moving the LUNs to the SVC . . . 329
6.8.4 Migrating image mode volumes to volumes . . . 331
6.8.5 Preparing to migrate from the SVC . . . 333
6.8.6 Migrating the managed volumes . . . 336
6.8.7 Removing the LUNs from the SVC . . . 337
6.9 Using SVC for storage migration . . . 340
6.10 Using volume mirroring and thin-provisioned volumes together . . . 341
6.10.1 Zero detect feature . . . 341
6.10.2 Volume mirroring with thin-provisioned volumes . . . 343

Chapter 7. Easy Tier . . . 349
7.1 Overview of Easy Tier . . . 350
7.2 Easy Tier concepts . . . 350
7.2.1 SSD arrays and MDisks . . . 350
7.2.2 Disk tiers . . . 351
7.2.3 Single tier storage pools . . . 351
7.2.4 Multiple tier storage pools . . . 351
7.2.5 Easy Tier process . . . 352
7.2.6 Easy Tier operating modes . . . 353
7.2.7 Easy Tier activation . . . 354
7.3 Easy Tier implementation considerations . . . 355
7.3.1 Prerequisites . . . 355
7.3.2 Implementation rules . . . 355
7.3.3 Limitations . . . 356
7.4 Measuring and activating Easy Tier . . . 356
7.4.1 Measuring by using the Storage Advisor Tool . . . 357
7.5 SSD implementation and configuration . . . 359
7.5.1 Mirrored configuration . . . 360
7.5.2 Easy Tier . . . 362
7.5.3 Striped . . . 363
7.6 Using Easy Tier with the SVC CLI . . . 365
7.6.1 Initial cluster status . . . 365
7.6.2 Turning on Easy Tier evaluation mode . . . 365
7.6.3 Creating a multitier storage pool . . . 367
7.6.4 Setting the disk tier . . . 368
7.6.5 Checking a volume's Easy Tier mode . . . 368
7.6.6 Final cluster status . . . 369
7.7 Using Easy Tier with the SVC GUI . . . 369
7.7.1 Setting the disk tier on MDisks . . . 370
7.7.2 Checking Easy Tier status . . . 372

Chapter 8. Advanced Copy Services . . . 373
8.1 FlashCopy . . . 374
8.1.1 Business Requirements for FlashCopy . . . 374
8.1.2 Backup Improvements with FlashCopy . . . 374
8.1.3 Restore with FlashCopy . . . 375
8.1.4 Moving and migrating data with FlashCopy . . . 375
8.1.5 Application testing with FlashCopy . . . 375
8.1.6 Host and Application considerations to ensure FlashCopy integrity . . . 376
8.1.7 FlashCopy attributes . . . 376
8.2 Reverse FlashCopy . . . 377
8.2.1 FlashCopy and Tivoli Storage FlashCopy Manager . . . 378
8.3 FlashCopy functional overview . . . 381
8.4 Implementing SVC FlashCopy . . . 381
8.4.1 FlashCopy mappings . . . 382
8.4.2 Multiple Target FlashCopy . . . 382
8.4.3 Consistency Groups . . . 383
8.4.4 FlashCopy indirection layer . . . 385
8.4.5 Grains and the FlashCopy bitmap . . . 386
8.4.6 Interaction and dependency between Multiple Target FlashCopy mappings . . . 387
8.4.7 Summary of the FlashCopy indirection layer algorithm . . . 389
8.4.8 Interaction with the cache . . . 389
8.4.9 FlashCopy and image mode volumes . . . 390
8.4.10 FlashCopy mapping events . . . 391
8.4.11 FlashCopy mapping states . . . 393
8.4.12 Thin-provisioned FlashCopy . . . 395
8.4.13 Background copy . . . 396
8.4.14 Synthesis . . . 397
8.4.15 Serialization of I/O by FlashCopy . . . 397
8.4.16 Event handling . . . 397
8.4.17 Asynchronous notifications . . . 398
8.4.18 Interoperation with Metro Mirror and Global Mirror . . . 399
8.4.19 FlashCopy presets . . . 399
8.5 Volume Mirroring and migration options . . . 400
8.6 Metro Mirror . . . 410
8.6.1 Metro Mirror overview . . . 410
8.6.2 Remote copy techniques . . . 411
8.6.3 Metro Mirror features . . . 412
8.6.4 Multiple Cluster Mirroring . . . 413
8.6.5 Importance of write ordering . . . 416
8.6.6 Remote copy intercluster communication . . . 418
8.6.7 Metro Mirror attributes . . . 419
8.6.8 Methods of synchronization . . . 419
8.6.9 Metro Mirror states and events . . . 420
8.6.10 Practical use of Metro Mirror . . . 427
8.6.11 Valid combinations of FlashCopy, Metro Mirror, and Global Mirror . . . 428
8.6.12 Metro Mirror configuration limits . . . 428
8.7 Metro Mirror commands . . . 428
8.7.1 Listing available SVC cluster partners . . . 429
8.7.2 Creating the SVC cluster partnership . . . 429
8.7.3 Creating a Metro Mirror Consistency Group . . . 430
8.7.4 Creating a Metro Mirror relationship . . . 430
8.7.5 Changing a Metro Mirror relationship . . . 431
8.7.6 Changing a Metro Mirror Consistency Group . . . 431
8.7.7 Starting a Metro Mirror relationship . . . 432
8.7.8 Stopping a Metro Mirror relationship . . . 432
8.7.9 Starting a Metro Mirror Consistency Group . . . 433
8.7.10 Stopping a Metro Mirror Consistency Group . . . 433
8.7.11 Deleting a Metro Mirror relationship . . . 433
8.7.12 Deleting a Metro Mirror Consistency Group . . . 434
8.7.13 Reversing a Metro Mirror relationship . . . 434
8.7.14 Reversing a Metro Mirror Consistency Group . . . 434
8.7.15 Background copy . . . 434
8.8 Global Mirror . . . 435
8.8.1 Intracluster Global Mirror . . . 435
8.8.2 Intercluster Global Mirror . . . 435
8.8.3 Asynchronous remote copy . . . 435
8.8.4 SVC Global Mirror features . . . 436
8.8.5 Global Mirror relationship between master and auxiliary volumes . . . 438
8.8.6 Using Change Volumes with Global Mirror . . . 439
8.8.7 Importance of write ordering . . . 442
8.8.8 Global Mirror Consistency Groups . . . 442
8.8.9 Distribution of work among nodes . . . 444
8.8.10 Background copy performance . . . 444
8.8.11 Thin-provisioned background copy . . . 445
8.9 Global Mirror process . . . 445
8.9.1 Methods of synchronization . . . 445
8.9.2 Global Mirror states and events . . . 446
8.9.3 Practical use of Global Mirror . . . 454
8.9.4 Global Mirror configuration limits . . . 455
8.10 Global Mirror commands . . . 455
8.10.1 Listing the available SVC cluster partners . . . 456
8.10.2 Creating an SVC cluster partnership . . . 459
8.10.3 Creating a Global Mirror Consistency Group . . . 460
8.10.4 Creating a Global Mirror relationship . . . 460
8.10.5 Changing a Global Mirror relationship . . . 460
8.10.6 Changing a Global Mirror Consistency Group . . . 461
8.10.7 Starting a Global Mirror relationship . . . 461
8.10.8 Stopping a Global Mirror relationship . . . 461
8.10.9 Starting a Global Mirror Consistency Group . . . 462
8.10.10 Stopping a Global Mirror Consistency Group . . . 462
8.10.11 Deleting a Global Mirror relationship . . . 462
8.10.12 Deleting a Global Mirror Consistency Group . . . 463
8.10.13 Reversing a Global Mirror relationship . . . 463
8.10.14 Reversing a Global Mirror Consistency Group . . . 463
8.11 Troubleshooting Remote Copy . . . 464
8.11.1 1920 error . . . 464
8.11.2 1720 error . . . 466

Chapter 9. SAN Volume Controller operations using the command-line interface . . . 467
9.1 Normal operations using CLI . . . 468
9.1.1 Command syntax and online help . . . 468
9.2 Working with managed disks and disk controller systems . . . 470
9.2.1 Viewing disk controller details . . . 470
9.2.2 Renaming a controller . . . 471
9.2.3 Discovery status . . . 471
9.2.4 Discovering MDisks . . . 471
9.2.5 Viewing MDisk information . . . 473
9.2.6 Renaming an MDisk . . . 474
9.2.7 Including an MDisk . . . 474
9.2.8 Adding MDisks to a storage pool . . . 476
9.2.9 Showing MDisks in a storage pool . . . 476
9.2.10 Working with a storage pool . . . 476
9.2.11 Creating a storage pool . . . 476
9.2.12 Viewing storage pool information . . . 478
9.2.13 Renaming a storage pool . . . 479
9.2.14 Deleting a storage pool . . . 479
9.2.15 Removing MDisks from a storage pool . . . 480
9.3 Working with hosts . . . 480
9.3.1 Creating a Fibre Channel-attached host . . . 480
9.3.2 Creating an iSCSI-attached host . . . 481
9.3.3 Modifying a host . . . 483
9.3.4 Deleting a host . . . 484
9.3.5 Adding ports to a defined host . . . 484
9.3.6 Deleting ports . . . 485
9.4 Working with the Ethernet port for iSCSI . . . 486
9.5 Working with volumes . . . 487
9.5.1 Creating a volume . . . 487
9.5.2 Volume information . . . 489
9.5.3 Creating a thin-provisioned volume . . . 491
9.5.4 Creating a volume in image mode . . . 491
9.5.5 Adding a mirrored volume copy . . . 492
9.5.6 Splitting a mirrored volume . . . 496
9.5.7 Modifying a volume . . . 497
9.5.8 I/O governing . . . 498
9.5.9 Deleting a volume . . . 500
9.5.10 Expanding a volume . . . 500
9.5.11 Assigning a volume to a host . . . 501
9.5.12 Showing volumes to host mapping . . . 503
9.5.13 Deleting a volume to host mapping . . . 503
9.5.14 Migrating a volume . . . 503
9.5.15 Migrating a fully managed volume to an image mode volume . . . 504
9.5.16 Shrinking a volume . . . 505
9.5.17 Showing a volume on an MDisk . . . 506
9.5.18 Showing which volumes are using a storage pool . . . 506
9.5.19 Showing which MDisks are used by a specific volume . . . 507
9.5.20 Showing from which storage pool a volume has its extents . . . 507
9.5.21 Showing the host to which the volume is mapped . . . 508
9.5.22 Showing the volume to which the host is mapped . . . 508
9.5.23 Tracing a volume from a host back to its physical disk . . . 509
9.6 Scripting under the CLI for SVC task automation . . . 511
9.6.1 Scripting structure . . . 511
9.7 SVC advanced operations using the CLI . . . 515
9.7.1 Command syntax . . . 515
9.7.2 Organizing on window content . . . 515
9.8 Managing the clustered system using the CLI . . . 518
9.8.1 Viewing clustered system properties . . . 518
9.8.2 Changing system settings . . . 520
9.8.3 iSCSI configuration . . . 520
9.8.4 Modifying IP addresses . . . 521
9.8.5 Supported IP address formats . . . 522
9.8.6 Setting the clustered system time zone and time . . . 522
9.8.7 Starting statistics collection . . . 524
9.8.8 Determining the status of a copy operation . . . 524
9.8.9 Shutting down a clustered system . . . 524
9.9 Nodes . . . 526
9.9.1 Viewing node details . . . 526
9.9.2 Adding a node . . . 527
9.9.3 Renaming a node . . . 528
9.9.4 Deleting a node . . . 528
9.9.5 Shutting down a node . . . 529
9.10 I/O Groups . . . 531
9.10.1 Viewing I/O Group details . . . 531
9.10.2 Renaming an I/O Group . . . 531
9.10.3 Adding and removing hostiogrp . . . 531
9.10.4 Listing I/O Groups
9.11 Managing authentication
9.11.1 Managing users using the CLI
9.11.2 Managing user roles and groups
9.11.3 Changing a user
9.11.4 Audit log command
9.12 Managing Copy Services
9.12.1 FlashCopy operations
9.12.2 Setting up FlashCopy
9.12.3 Creating a FlashCopy Consistency Group
9.12.4 Creating a FlashCopy mapping
9.12.5 Preparing (pre-triggering) the FlashCopy mapping
9.12.6 Preparing (pre-triggering) the FlashCopy Consistency Group
9.12.7 Starting (triggering) FlashCopy mappings
9.12.8 Starting (triggering) FlashCopy Consistency Group
9.12.9 Monitoring the FlashCopy progress
9.12.10 Stopping the FlashCopy mapping
9.12.11 Stopping the FlashCopy Consistency Group
9.12.12 Deleting the FlashCopy mapping
9.12.13 Deleting the FlashCopy Consistency Group
9.12.14 Migrating a volume to a thin-provisioned volume
9.12.15 Reverse FlashCopy
9.12.16 Split-stopping of FlashCopy maps
9.13 Metro Mirror operation
9.13.1 Setting up Metro Mirror
9.13.2 Creating an SVC partnership between ITSO_SVC1 and ITSO_SVC4
9.13.3 Creating a Metro Mirror Consistency Group
9.13.4 Creating the Metro Mirror relationships
9.13.5 Creating a stand-alone Metro Mirror relationship for MM_App_Pri
9.13.6 Starting Metro Mirror
9.13.7 Starting a Metro Mirror Consistency Group
9.13.8 Monitoring the background copy progress
9.13.9 Stopping and restarting Metro Mirror
9.13.10 Stopping a stand-alone Metro Mirror relationship
9.13.11 Stopping a Metro Mirror Consistency Group
9.13.12 Restarting a Metro Mirror relationship in the Idling state
9.13.13 Restarting a Metro Mirror Consistency Group in the Idling state
9.13.14 Changing copy direction for Metro Mirror
9.13.15 Switching copy direction for a Metro Mirror relationship
9.13.16 Switching copy direction for a Metro Mirror Consistency Group
9.13.17 Creating an SVC partnership among many clustered systems
9.13.18 Star configuration partnership
9.14 Global Mirror operation
9.14.1 Setting up Global Mirror
9.14.2 Creating an SVC partnership between ITSO_SVC1 and ITSO_SVC4
9.14.3 Changing link tolerance and system delay simulation
9.14.4 Creating a Global Mirror Consistency Group
9.14.5 Creating Global Mirror relationships
9.14.6 Creating the stand-alone Global Mirror relationship for GM_App_Pri
9.14.7 Starting Global Mirror
9.14.8 Starting a stand-alone Global Mirror relationship
9.14.9 Starting a Global Mirror Consistency Group
9.14.10 Monitoring background copy progress
9.14.11 Stopping and restarting Global Mirror
9.14.12 Stopping a stand-alone Global Mirror relationship
9.14.13 Stopping a Global Mirror Consistency Group
9.14.14 Restarting a Global Mirror relationship in the Idling state
9.14.15 Restarting a Global Mirror Consistency Group in the Idling state
9.14.16 Changing direction for Global Mirror
9.14.17 Switching copy direction for a Global Mirror relationship
9.14.18 Switching copy direction for a Global Mirror Consistency Group
9.14.19 Changing a GM relationship to cycling mode
9.14.20 Create thin provisioned change volumes
9.14.21 Stop standalone remote copy relationship
9.14.22 Set cycling mode on standalone remote copy relationship
9.14.23 Set change volume on master volume
9.14.24 Set change volume on auxiliary volume
9.14.25 Start standalone relationship in cycling mode
9.14.26 Stop Consistency Group to change the cycling mode
9.14.27 Set cycling mode on Consistency Group
9.14.28 Set change volume on master volume relationships of the Consistency Group
9.14.29 Set change volume on auxiliary volumes
9.14.30 Start Consistency Group CG_W2K3_GM in cycling mode
9.15 Service and maintenance
9.15.1 Upgrading software
9.15.2 Running maintenance procedures
9.15.3 Setting up SNMP notification
9.15.4 Set syslog event notification
9.15.5 Configuring error notification using an email server
9.15.6 Analyzing the event log
9.15.7 License settings
9.15.8 Listing dumps
9.16 Backing up the SVC system configuration
9.16.1 Prerequisites
9.17 Restoring the SVC clustered system configuration
9.17.1 Deleting configuration backup
9.18 Working with the SVC Quorum MDisk
9.18.1 Listing the SVC Quorum MDisk
9.18.2 Changing the SVC Quorum Disk
9.19 Working with the Service Assistant menu
9.19.1 SVC CLI Service Assistant menu
9.20 SAN troubleshooting and data collection
9.21 T3 recovery process
Chapter 10. SAN Volume Controller operations using the GUI
10.1 SVC normal operations using the GUI
10.1.1 Introduction to SVC normal operations using the GUI
10.1.2 Organizing on window content
10.1.3 Help
10.2 Working with External Disk Controllers
10.2.1 Viewing Disk Controller details
10.2.2 Renaming a disk controller
10.2.3 Discovering MDisks from the External panel
10.3 Working with Storage Pools
10.3.1 Viewing Storage Pool information

10.3.2 Discovering MDisks
10.3.3 Creating Storage Pools
10.3.4 Renaming a Storage Pool
10.3.5 Deleting a Storage Pool
10.3.6 Adding or removing MDisks from a Storage Pool
10.3.7 Showing the volumes that are associated with a Storage Pool
10.4 Working with managed disks
10.4.1 MDisk information
10.4.2 Renaming an MDisk
10.4.3 Discovering MDisks
10.4.4 Adding MDisks to a Storage Pool
10.4.5 Removing MDisks from a Storage Pool
10.4.6 Including an excluded MDisk
10.4.7 Activating EasyTier
10.5 Migration
10.6 Working with hosts
10.6.1 Host information
10.6.2 Creating a host
10.6.3 Renaming a host
10.6.4 Modifying a host
10.6.5 Deleting a host
10.6.6 Adding ports
10.6.7 Deleting ports
10.6.8 Creating or modifying the host mapping
10.6.9 Deleting a host mapping
10.6.10 Deleting all host mappings for a given host
10.7 Working with volumes
10.7.1 Volume information
10.7.2 Creating a volume
10.7.3 Renaming a volume
10.7.4 Modifying a volume
10.7.5 Modifying thin-provisioning volume properties
10.7.6 Deleting a volume
10.7.7 Creating or modifying the host mapping
10.7.8 Deleting a host mapping
10.7.9 Deleting all host mappings for a given volume
10.7.10 Shrinking a volume
10.7.11 Expanding a volume
10.7.12 Shrinking the real capacity of a thin-provisioned volume
10.7.13 Expanding the real capacity of a thin provisioned volume
10.7.14 Migrating a volume
10.7.15 Adding a mirrored copy to an existing volume
10.7.16 Deleting a mirrored copy from a volume mirror
10.7.17 Splitting a volume copy
10.7.18 Validating volume copies
10.7.19 Migrating to a thin-provisioned volume using volume mirroring
10.7.20 Creating a volume in image mode
10.7.21 Migrating a volume to an image mode volume
10.7.22 Creating an image mode mirrored volume
10.8 Copy Services: managing FlashCopy
10.8.1 Creating a FlashCopy Mapping
10.8.2 Creating and starting a snapshot preset with a single click
10.8.3 Creating and starting a clone preset with a single click
10.8.4 Creating and starting a backup preset with a single click
10.8.5 Creating a FlashCopy Consistency Group
10.8.6 Creating FlashCopy mappings in a Consistency Group
10.8.7 Show Dependent Mappings
10.8.8 Moving a FlashCopy mapping to a Consistency Group
10.8.9 Removing a FlashCopy mapping from a Consistency Group
10.8.10 Modifying a FlashCopy mapping
10.8.11 Renaming a FlashCopy mapping
10.8.12 Renaming a Consistency Group
10.8.13 Deleting a FlashCopy mapping
10.8.14 Deleting a FlashCopy Consistency Group
10.8.15 Starting FlashCopy mappings
10.8.16 Starting a FlashCopy Consistency Group
10.8.17 Stopping the FlashCopy Consistency Group
10.8.18 Stopping the FlashCopy mapping
10.8.19 Migrating between a fully allocated volume and a Space-Efficient volume
10.8.20 Reversing and splitting a FlashCopy mapping
10.9 Copy Services: managing Remote Copy
10.9.1 Cluster partnership
10.9.2 Creating the SVC partnership between two remote SVC Clusters
10.9.3 Creating stand-alone remote copy relationships
10.9.4 Creating a Consistency Group
10.9.5 Renaming a Consistency Group
10.9.6 Renaming a Remote Copy relationship
10.9.7 Moving a stand-alone Remote Copy relationship to a Consistency Group
10.9.8 Removing Remote Copy relationship from a Consistency Group
10.9.9 Starting a Remote Copy relationship
10.9.10 Starting a Remote Copy Consistency Group
10.9.11 Switching the copy direction for a Remote Copy relationship
10.9.12 Switching the copy direction for a Consistency Group
10.9.13 Stopping a Remote Copy relationship
10.9.14 Stopping a Consistency Group
10.9.15 Deleting stand-alone Remote Copy relationships
10.9.16 Deleting a Consistency Group
10.10 Managing the cluster using the GUI
10.10.1 System Status information
10.10.2 View I/O groups and their associated nodes
10.10.3 View cluster properties
10.10.4 Renaming an SVC cluster
10.10.5 Shutting down a cluster
10.10.6 Upgrading software
10.11 Managing I/O Groups
10.11.1 View I/O group properties
10.11.2 Modifying I/O group properties
10.12 Managing nodes
10.12.1 View node properties
10.12.2 Renaming a node
10.12.3 Adding a node to the cluster
10.12.4 Removing a node from the cluster
10.13 Troubleshooting
10.13.1 Monitoring panel
10.13.2 Event Log panel
10.13.3 Run fix procedure
10.13.4 Support panel
10.14 User Management
10.14.1 Creating a user
10.14.2 Modifying user properties
10.14.3 Removing a user password
10.14.4 Removing a user SSH Public Key
10.14.5 Deleting a user
10.14.6 Creating a user group
10.14.7 Modifying user group properties
10.14.8 Deleting a user group
10.14.9 Audit log information
10.15 Configuration
10.15.1 Configuring the Network
10.15.2 Configuring the Service IP addresses
10.15.3 iSCSI configuration
10.15.4 Fibre Channel information
10.15.5 Event notifications
10.15.6 Email notifications
10.15.7 SNMP notifications
10.15.8 Using the General panel
10.15.9 Date and Time
10.15.10 Licensing
10.15.11 Upgrading software
10.15.12 Setting GUI Preferences
10.16 Upgrading SVC software
10.16.1 Precautions before upgrade
10.16.2 SVC software upgrade test utility
10.16.3 Upgrade procedure
10.17 Service Assistant with the GUI
10.17.1 Placing an SVC node into Service State
10.17.2 Exiting an SVC node from Service State
10.17.3 Rebooting an SVC node
10.17.4 Collect Logs page
10.17.5 Manage Cluster page
10.17.6 Recover Cluster
10.17.7 Reinstall software
10.17.8 Upgrade Manually
10.17.9 Modify WWNN
10.17.10 Change Service IP
10.17.11 Configure CLI access
10.17.12 Restart Service
Appendix A. Performance data and statistics gathering
SVC performance overview
Performance considerations
SVC performance perspectives
Performance monitoring
Collecting performance statistics
Real-Time Performance Monitoring
Performance data collection and Tivoli Storage Productivity Center for Disk

Appendix B. Terminology
Commonly encountered terms

Appendix C. SAN Volume Controller Split I/O Group Overview, Diagnostics, and Recovery Guidelines
Introduction
Split I/O Group overview
No ISL Configuration
ISL Configuration
Diagnosis and recovery planning
Diagnosis guidelines
Diagnosis Guidelines for NO ISL configuration
Diagnosis Guidelines for ISL configuration
Recovery guidelines
What do you need to supply to recover the Split I/O Group configuration
Recovery Guidelines for No ISL configuration
Recovery Guidelines for ISL configuration
Related publications
IBM Redbooks
Other publications
Online resources
Help from IBM

Index

Notices
This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.

Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml

The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:
AIX 5L
AIX
DB2
developerWorks
DS4000
DS8000
FlashCopy
GPFS
IBM Systems Director Active Energy Manager
IBM
Power Systems
Redbooks
Redbooks (logo)
System p
System Storage DS
System Storage
System x
Tivoli
TotalStorage
WebSphere
XIV

The following terms are trademarks of other companies:

Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.

Summary of changes
This section describes the technical changes made in this edition of the book and in previous editions. This edition might also include minor corrections and editorial changes that are not identified.

Summary of Changes for SG24-7933-01, IBM System Storage SAN Volume Controller V6.3, as created or updated on January 17, 2012.

October 2011, Second Edition


This revision reflects the addition, deletion, or modification of new and changed information described below.

New information
Split cluster I/O Groups

Changed information
Screen captures all at 6.3 level

Preface
This IBM Redbooks publication is a detailed technical guide to the IBM System Storage SAN Volume Controller (SVC) Version 6.3.0.

SAN Volume Controller is a virtualization appliance solution that maps virtualized volumes, which are visible to hosts and applications, to physical volumes on storage devices. Each server within the storage area network (SAN) has its own set of virtual storage addresses that are mapped to physical addresses. If the physical addresses change, the server continues running using the same virtual addresses that it had before. Therefore, volumes or storage can be added or moved while the server is still running.

The IBM virtualization technology improves the management of information at the block level in a network, thus enabling applications and servers to share storage devices on a network.

This book is intended for readers who need to implement the SVC at a 6.3.0 release level with a minimum of effort.

The team who wrote this book


This book was produced by a team of specialists from around the world working at the International Technical Support Organization, San Jose Center.

Jon Tate is a Project Manager for IBM System Storage SAN Solutions at the International Technical Support Organization, San Jose Center. Before joining the ITSO in 1999, he worked in the IBM Technical Support Center, providing Level 2 support for IBM storage products. Jon has 26 years of experience in storage software and management, services, and support, and is both an IBM Certified IT Specialist and an IBM SAN Certified Specialist. He is also the UK Chairman of the Storage Networking Industry Association.

Alejandro Berardinelli has been an IT Storage Specialist with IBM Uruguay since 2005. His primary focus is IBM storage implementations involving IBM DS8000, DS5000, and Storwize V7000 systems, tape subsystems, and Brocade and Cisco switches. He also works with Tivoli Storage Manager and Tivoli Storage Productivity Center deployments and support, and has provided storage support for several customers in South America. Alejandro holds a degree in Computer Engineering from UdelaR and has coauthored other IBM Redbooks publications.

Mark Chitti is an IBM Expert Certified IT Specialist and an Open Group Master Certified IT Specialist. He currently holds a position as team lead for approximately one quarter of the account storage architects within Integrated Technology Delivery. Mark joined IBM in 2001, having been a sub-contractor to IBM for just under a year before that. Since joining IBM, Mark has never ventured outside ITD's Storage Service Line, but has held several positions within it. In 2004, Mark moved from delivery roles to the architecture area. He is currently working toward his Senior Technical Staff Member (STSM) appointment within IBM and performs an acting STSM function in addition to his daily duties while he gains the experience needed to formally obtain the appointment.

Torben Jensen is an IT Specialist at IBM Global Technology Services, Copenhagen, Denmark. He joined IBM in 1999 for an apprenticeship as an IT systems supporter. From 2001 to 2005 he was the client representative for IBM internal client platforms in Denmark. Torben joined the SAN/disk department for open systems in March 2005, where he provides daily and ongoing support and works on SAN designs and solutions for customers.

Massimo Rosati is a Certified ITS Senior Storage and SAN Software Specialist at IBM Italy. He has 26 years of experience in the delivery of professional services and software support. His areas of expertise include storage hardware, storage area networks, storage virtualization, disaster recovery, and business continuity solutions. He has written other IBM Redbooks publications on storage virtualization products.

Christian Schroeder is a Storage and SAN support specialist at the Technical Support and Competence Center (TSCC) in IBM Germany, where he has worked since 1999. Before joining the TSCC for IBM Systems Storage, he worked as a support specialist for IBM System x servers and provided EMEA Level 2 support for IBM BladeCenter solutions.

Figure 1 shows the authors (Mark Chitti not pictured).

Figure 1 Authors, L-R, Jon, Alejandro, Massimo, Torben, and Christian

This book was produced by a team of specialists from around the world working at Brocade Communications Systems, San Jose, and the International Technical Support Organization, San Jose Center.

We extend our thanks to the following people for their contributions to this project, including the development and PFE teams in Hursley. In particular, we thank the previous authors of versions of this book:

Matt Amanat, Pall Beck, Angelo Bernasconi, Alexandre Chabrol, Steve Cody, Sean Crawford, Peter Crowhurst, Sameer Dhulekar, Werner Eggli, Frank Enders, Katja Gebuhr, Deon George, Amarnath Hiriyannappa, Thorsten Hoss, Juerg Hossli, Philippe Jachimczyk, Kamalakkannan J Jayaraman, Dan Koeck, Bent Lerager, Ian MacQuarrie, Craig McKenna, Andy McManus, Joao Marcos Leite, Barry Mellish, Suad Musovich, Massimo Rosati, Fred Scholten, Robert Symons, Marcus Thordal, and Xiao Peng Zhao.

Thanks also to the following people for their contributions to previous editions, and to those who contributed to this edition:

Chris Canto, Peter Eccles, Huw Francis, Carlos Fuente, Alex Howell, Colin Jewell, Neil Kirkland, Geoff Lane, Andrew Martin, Paul Merrison, Evelyn Perez, Steve Randle, Lucy Harris (nee Raw), Greg Shepherd, Bill Scales, Matt Smith, Barry Whyte, and Muhammad Zubair
IBM Hursley

Marc Bruni
IBM Houston

Larry Chiu and Paul Muench
IBM Almaden

Bill Wiegand
IBM Advanced Technical Support

Sharon Wang
IBM Chicago

Chris Saul
IBM San Jose
Tina Sampson
IBM Tucson

Sangam Racherla
IBM ITSO

Special thanks to the Brocade staff for their unparalleled support of this residency in terms of equipment and support in many areas:

Jim Baldyga, Mansi Botadra, Yong Choi, Silviano Gaona, Brian Steffler, Marcus Thordal, and Steven Tong
Brocade Communications Systems

Now you can become a published author, too!


Here's an opportunity to spotlight your skills, grow your career, and become a published author, all at the same time! Join an ITSO residency project and help write a book in your area of expertise, while honing your experience using leading-edge technologies. Your efforts will help to increase product acceptance and customer satisfaction, as you expand your network of technical contacts and relationships. Residencies run from two to six weeks in length, and you can participate either in person or as a remote resident working from your home base.

Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html

Comments welcome
Your comments are important to us! We want our books to be as helpful as possible. Send us your comments about this book or other IBM Redbooks publications in one of the following ways:

Use the online Contact us review Redbooks form found at:
ibm.com/redbooks

Send your comments in an email to:
redbooks@us.ibm.com

Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400

Stay connected to IBM Redbooks


Find us on Facebook:
http://www.facebook.com/IBMRedbooks

Follow us on Twitter:
http://twitter.com/ibmredbooks

Look for us on LinkedIn:
http://www.linkedin.com/groups?home=&gid=2130806

Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks weekly newsletter:
https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm

Stay current on recent Redbooks publications with RSS Feeds:
http://www.redbooks.ibm.com/rss.html

Chapter 1. Introduction to storage virtualization


In this chapter we define the concept of storage virtualization, and then present an overview explaining how you can apply virtualization to help address today's challenging storage requirements.

1.1 Storage virtualization terminology


Although storage virtualization is a term that is used extensively throughout the storage industry, it can be applied to a wide range of technologies and underlying capabilities. In reality, most storage devices can technically claim to be virtualized in one form or another. Therefore, we must start by defining the concept of storage virtualization as used in this book.

This is how IBM defines storage virtualization: storage virtualization is a technology that makes one set of resources look and feel like another set of resources, preferably with more desirable characteristics. It is a logical representation of resources that is not constrained by physical limitations:

- It hides part of the complexity.
- It adds or integrates new function with existing services.
- It can be nested or applied to multiple layers of a system.

When discussing storage virtualization, it is important to understand that virtualization can be implemented at various layers within the I/O stack. We have to clearly distinguish between virtualization at the disk layer and virtualization at the file system layer. The focus of this book is virtualization at the disk layer, which is more specifically referred to as block-level virtualization, or the block aggregation layer. A discussion of file system virtualization is beyond the scope of this book. However, if you are interested in file system virtualization, refer to IBM General Parallel File System (GPFS) or IBM Scale Out Network Attached Storage (SONAS), which is based on GPFS.

To obtain more information and an overview of GPFS, visit the following website:
http://www-03.ibm.com/systems/software/gpfs/

To obtain more information about SONAS, visit the following website:
http://www-03.ibm.com/systems/storage/network/sonas/

The Storage Networking Industry Association's (SNIA) block aggregation model (Figure 1-1 on page 3) provides a useful overview of the storage domain and its layers. The figure shows the three layers of a storage domain: the file, the block aggregation, and the block subsystem layers. The model splits the block aggregation layer into three sublayers. Block aggregation can be realized within hosts (servers), in the storage network (storage routers and storage controllers), or in storage devices (intelligent disk arrays). The IBM implementation of a block aggregation solution is the IBM System Storage SAN Volume Controller (SVC). The SVC is implemented as a clustered appliance in the storage network layer. Chapter 2, IBM System Storage SAN Volume Controller on page 9 explains the reasons why IBM chose to implement its IBM System Storage SAN Volume Controller in the storage network layer.

Figure 1-1 SNIA block aggregation model

The key concept of virtualization is to decouple the storage from the storage functions required in today's storage area network (SAN) environment.

Decoupling means abstracting the physical location of data from the logical representation of the data. The virtualization engine presents logical entities to the user and internally manages the process of mapping these entities to the actual location of the physical storage.
The actual mapping performed is dependent upon the specific implementation, as is the granularity of the mapping, which can range from a small fraction of a physical disk up to the full capacity of a physical disk. A single block of information in this environment is identified by its logical unit number (LUN), which identifies the physical disk, and an offset within that LUN, which is known as a logical block address (LBA). Note that the term physical disk is used in this context to describe a piece of storage that might be carved out of a RAID array in the underlying disk subsystem. Specific to the SVC implementation, the logical entity whose address space is mapped is referred to as a volume, and the physical disk is referred to as a managed disk (MDisk). Figure 1-2 on page 4 shows an overview of block-level virtualization.
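To make the mapping idea concrete, the following minimal Python sketch translates a volume LBA into a backing MDisk and an MDisk-relative LBA through a per-volume extent map. This is an illustration only, not SVC code: the extent size and the ExtentMap structure are invented for the example and do not reflect the SVC's internal data structures.

# Minimal sketch (not SVC code): extent-based translation of a volume LBA
# to a backing MDisk and offset. EXTENT_SIZE is an illustrative value.

EXTENT_SIZE = 16 * 1024 * 1024  # extent size in blocks (illustrative)

class ExtentMap:
    """Maps a volume's logical extents to (MDisk id, MDisk extent) pairs."""

    def __init__(self):
        self.map = {}  # volume extent number -> (mdisk_id, mdisk_extent)

    def translate(self, volume_lba):
        """Return the MDisk and MDisk-relative LBA backing a volume LBA."""
        extent_no, offset = divmod(volume_lba, EXTENT_SIZE)
        mdisk_id, mdisk_extent = self.map[extent_no]
        return mdisk_id, mdisk_extent * EXTENT_SIZE + offset

# Example: volume extent 0 happens to live on MDisk 2, extent 97.
vmap = ExtentMap()
vmap.map[0] = (2, 97)
print(vmap.translate(4096))  # -> (2, 97 * EXTENT_SIZE + 4096)

Because the host addresses only the volume side of this map, the virtualization layer is free to change what sits on the MDisk side at any time.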

Figure 1-2 Block-level virtualization overview

The server and application are only aware of the logical entities, and they access these entities using a consistent interface that is provided by the virtualization layer. The functionality of a volume that is presented to a server, such as expanding or reducing the size of a volume, mirroring a volume, creating a FlashCopy, thin provisioning, and so on, is implemented in the virtualization layer. It does not rely in any way on the functionality that is provided by the underlying disk subsystem. Data that is stored in a virtualized environment is stored in a location-independent way, which allows a user to move or migrate data between physical locations, referred to as storage pools.

These capabilities are the cornerstones of block-level storage virtualization, and they are the core benefits that a product such as the SVC can provide over traditional directly attached or SAN storage. The SVC provides the following benefits:

- Online volume migration while applications are running, which is possibly the greatest single benefit of storage virtualization. This capability allows data to be migrated on and between the underlying storage subsystems without any impact to the servers and applications; in fact, the migration is performed without the servers and applications even knowing that it occurred (see the sketch after this list).
- Simplified storage management, by providing a single image for multiple controllers and a consistent user interface for provisioning heterogeneous storage.
- Enterprise-level copy services functions. Performing the copy services functions within the SVC removes dependencies on the storage subsystems, thereby enabling the source and target copies to be on different storage subsystem types.
- Increased storage utilization, by pooling storage across the SAN.
- Improved system performance, as a result of volume striping across multiple arrays or controllers and the additional cache that the SVC provides.

The SVC delivers these functions in a homogeneous way on a scalable and highly available platform, over any attached storage, and to any attached server.
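Building on the hypothetical ExtentMap from the previous sketch, the following fragment shows why such a migration can be transparent: the data is copied to the target MDisk and only the map entry is then switched, so the volume address that the host uses never changes. The read_extent and write_extent helpers are assumed placeholders for the back-end I/O paths, and a real implementation must also handle host writes that arrive while the copy is in flight, which this sketch deliberately ignores.

# Minimal sketch (not SVC code) of transparent extent migration, reusing
# the hypothetical ExtentMap above. read_extent/write_extent are assumed
# placeholders for the real back-end read and write paths.

def migrate_extent(vmap, extent_no, target_mdisk, target_extent,
                   read_extent, write_extent):
    """Move one volume extent to another MDisk; the volume LBA space the
    host sees is unchanged because only the map entry is updated."""
    src_mdisk, src_extent = vmap.map[extent_no]
    data = read_extent(src_mdisk, src_extent)        # copy the extent data
    write_extent(target_mdisk, target_extent, data)  # to the new location
    vmap.map[extent_no] = (target_mdisk, target_extent)  # switch the map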
1.2 User requirements driving storage virtualization


In today's environment there is an emphasis on a smarter planet and dynamic infrastructure. Thus, there is a need for a storage environment that is as flexible as the application and server mobility it supports. Business demands change quickly. These key client concerns drive storage virtualization:

- Growth in data center costs
- Inability of IT organizations to respond quickly to business demands
- Poor asset utilization
- Poor availability or service levels
- Lack of skilled staff for storage administration

You can see the importance of addressing the complexity of managing storage networks by applying the total cost of ownership (TCO) metric to storage networks. Industry analyses show that storage acquisition costs are only about 20% of the TCO. Most of the remaining costs are related to managing the storage system. But how much of the management of multiple systems, with separate interfaces, can be handled as a single entity? In a non-virtualized storage environment, every system is an island that needs to be managed separately.

1.2.1 Benefits of using the SVC


The SVC can reduce the number of separate environments that need to be managed down to a single environment. It provides a single interface for storage management. After the initial configuration of the storage subsystems, all of the day-to-day storage management operations are performed from the SVC. Because the SVC provides advanced functions such as mirroring and FlashCopy, there is no need to purchase them again for each new disk subsystem.

Today, it is typical that open systems run at significantly less than 50% of the usable capacity provided by the RAID disk subsystems. Utilization of the installed raw capacity in the disk subsystems is, depending on the RAID level that is used, often less than 35%. A block-level virtualization solution, such as the SVC, can allow capacity utilization to increase to approximately 75 to 80%. With the SVC, free space does not need to be maintained and managed within each storage subsystem, which further increases capacity utilization.
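As a back-of-the-envelope illustration of what that utilization difference means in provisioned capacity, using numbers chosen for the example rather than taken from any measurement:

# Illustrative arithmetic only: capacity that must be provisioned to hold
# 35 TB of data at 35% versus 75% utilization.
data_tb = 35
islands_tb = data_tb / 0.35   # isolated subsystems: 100 TB provisioned
pooled_tb = data_tb / 0.75    # virtualized pool: about 47 TB provisioned
print(round(islands_tb), round(pooled_tb))  # 100 47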

1.3 What is new in SVC V6.3.0


IBM System Storage SAN Volume Controller (SVC) V6.3.0 has been designed to provide significant new capabilities to assist storage administrators in their responsibilities, thereby enabling them to manage even broader and more complex storage infrastructures and achieve maximum performance out of their storage. V6.3.0 delivers enhancements to the SVC in the areas of performance, split I/O groups, Copy Services, LDAP, and interoperability, and brings a new Tivoli Storage Productivity Center release.

System performance improvements are available with improved management of the I/O paths to attached storage systems. This round-robin method allows more flexibility for data paths and provides greater performance reliability.

SVC V6.3.0 introduces the ability to extend the distance between SVC nodes in a stretched cluster (split I/O group) configuration. While the extended distances depend on application latency restrictions, this function now enables enterprises to access and share a consistent view of data simultaneously across data centers, and to relocate data across disk array vendors and tiers, both inside and between data centers, at full metro distances. Remote deployments for disaster recovery for current SAN Volume Controller environments can easily be incorporated with the Storwize V7000, or vice versa.

Enhancements to Global Mirror with SAN Volume Controller V6.3.0 are designed to provide new options to help administrators balance network bandwidth requirements and recovery point objectives (RPO) for applications. The SVC now supports higher RPO times, providing the option to use a lower-bandwidth link between mirrored sites. This lower-bandwidth remote mirroring uses space-efficient FlashCopy targets as sources in remote copy relationships to increase the time allowed to complete a remote copy data cycle.

SVC management functions are now easier to integrate with existing user authentication methods in the customer's data center with native LDAP support. You can now connect to the clustered system via PuTTY using the same user name with which you log in to the SAN Volume Controller, and the use of the SSH key is optional.

SVC interoperability now supports additional storage products, including IBM XIV Gen3, HP 3PAR, and Violin Flash Memory Arrays, and supports additional models of Bull StoreWay, Fujitsu ETERNUS, and Texas Memory Systems RamSan. Interoperability is also available for VMware vSphere 5 and Red Hat Enterprise Linux 6.

IBM Tivoli Storage Productivity Center V4.2.2 is a feature-rich storage management software suite. The integrated suite provides detailed monitoring, reporting, and management within a single console.
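As a conceptual sketch of the round-robin idea only (the SVC's actual path selection is internal to the product and is not exposed as code), the following Python snippet rotates I/O across the available paths to a back-end storage system:

# Conceptual sketch only: rotate across the I/O paths available to a
# storage system so that no single path carries all of the traffic.

from itertools import cycle

class RoundRobinPaths:
    def __init__(self, paths):
        self._paths = cycle(paths)  # endless iterator over the path list

    def next_path(self):
        """Return the path to use for the next I/O."""
        return next(self._paths)

paths = RoundRobinPaths(["port0", "port1", "port2", "port3"])
print([paths.next_path() for _ in range(6)])
# ['port0', 'port1', 'port2', 'port3', 'port0', 'port1']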

1.4 Summary
Storage virtualization is no longer merely a concept or an unproven technology. All major storage vendors offer storage virtualization products. Making use of storage virtualization as the foundation for a flexible and reliable storage solution helps enterprises to better align business and IT by optimizing the storage infrastructure and storage management to meet business demands.

The IBM System Storage SAN Volume Controller is a mature, sixth-generation virtualization solution that uses open standards and is consistent with the Storage Networking Industry Association (SNIA) storage model. The SVC is an appliance-based, in-band block virtualization solution in which intelligence, including advanced storage functions, is migrated from individual storage devices to the storage network. The IBM System Storage SAN Volume Controller can improve the utilization of your storage resources, simplify your storage management, and improve the availability of your applications.

Chapter 2. IBM System Storage SAN Volume Controller


In this chapter we explain the major concepts underlying the IBM System Storage SAN Volume Controller (SVC). We begin by presenting a brief history of the SVC product, and then provide you with an architectural overview. After defining SVC terminology, we describe software and hardware concepts and the additional functionalities that will be available with the newest release. Finally, we provide links to websites where you can find further information about SVC.


2.1 Brief history of the SAN Volume Controller


The IBM implementation of block-level storage virtualization, the IBM System Storage SAN Volume Controller (SVC), is based on an IBM project that was initiated in the second half of 1999 at the IBM Almaden Research Center. The project was called COMmodity PArts Storage System, or COMPASS. One goal of this project was to create a system almost exclusively composed of off-the-shelf standard parts. As with any enterprise-level storage control system, it had to deliver a level of performance and availability comparable to the highly optimized storage controllers of previous generations. The idea of building a storage control system based on a scalable cluster of lower-performance servers, instead of a monolithic architecture of two nodes, remains a compelling one.

COMPASS also had to address a major challenge for the heterogeneous open systems environment, namely, to reduce the complexity of managing storage on block devices. The first documentation covering this project was released to the public in 2003 in the IBM Systems Journal, Vol. 42, No. 2, 2003, The software architecture of a SAN storage control system, by J. S. Glider, C. F. Fuente, and W. J. Scales, which you can read at this website:

http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5386853

The results of the COMPASS project defined the fundamentals for the product architecture. The first release of the IBM System Storage SAN Volume Controller was announced in July 2003. Each of the following releases brought new and more powerful hardware nodes, which approximately doubled the I/O performance and throughput of their predecessors, provided new functionality, and offered additional interoperability with new elements in host environments, disk subsystems, and the storage area network (SAN).

The most recently released hardware node, the 2145-CG8, is based on IBM System x 3550 M3 server technology with an Intel Xeon 5500 2.53 GHz quad-core processor (Nehalem), 24 GB of cache, four 8 Gbps Fibre Channel ports, and two 1 Gbps Ethernet ports. It is capable of supporting up to four internal solid-state drives (SSDs). Currently, IBM has shipped over 21,500 SVC engines running in more than 6900 SVC systems worldwide.

The IBM System Storage SVC V6.3 includes support for extended-distance stretched clusters, round-robin data path selection for attached storage, native LDAP support, lower-bandwidth Global Mirror, and mirroring between SVC and Storwize V7000.

2.2 SVC architectural overview


The IBM System Storage SAN Volume Controller is a SAN block aggregation virtualization appliance that is designed for attachment to a variety of host computer systems.

There are two major approaches in use today for implementing block-level aggregation and virtualization:

Symmetric: in-band appliance
The device is a SAN appliance that sits in the data path, and all I/O flows through the device. This kind of implementation is also referred to as symmetric virtualization or in-band. The device is both target and initiator: it is the target of I/O requests from the host perspective, and the initiator of I/O requests from the storage perspective. The redirection is performed by issuing new I/O requests to the storage. The SVC uses symmetric virtualization.

Asymmetric: out-of-band or controller-based
The device is usually a storage controller that provides an internal switch for external storage attachment. In this approach, the storage controller intercepts and redirects I/O requests to the external storage as it does for internal storage; the actual I/O requests are themselves redirected. This kind of implementation is also referred to as asymmetric virtualization or out-of-band.

Figure 2-1 shows variations of the two virtualization approaches.

Figure 2-1 Overview of block-level virtualization architectures

Although these approaches provide essentially the same cornerstones of virtualization, there can be interesting side effects, as discussed here.

The controller-based approach has high functionality, but it falls short in terms of scalability and upgradability. Because of the nature of its design, there is no true decoupling with this approach, which becomes an issue for the life cycle of this solution, such as a controller. You will be challenged with data migration issues and questions, such as how to reconnect the servers to the new controller, and how to reconnect them online without any impact on your applications. Be aware that with this approach, you not only replace a controller but also implicitly replace your entire virtualization solution. In addition to replacing the hardware, it can also be necessary to update or repurchase the licenses for the virtualization feature, advanced copy functions, and so on.

With a SAN or fabric-based appliance solution that is based on a scale-out cluster architecture, life cycle management tasks, such as adding or replacing disk subsystems or migrating data between them, are extremely simple. Servers and applications remain online, data migration takes place transparently on the virtualization platform, and licenses for virtualization and copy services require no update, that is, no additional costs, when disk subsystems are replaced.

Only the fabric-based appliance solution provides an independent and scalable virtualization platform that can provide enterprise-class copy services; is open for future interfaces and protocols; allows you to choose the disk subsystems that best fit your requirements; and does not lock you into specific SAN hardware. For these reasons, IBM has chosen the SAN or fabric-based appliance approach for the implementation of the IBM System Storage SAN Volume Controller (SVC).

The SVC possesses the following key characteristics:
- It is highly scalable, providing an easy growth path from two to n nodes (growth occurs in pairs of nodes).
- It is SAN interface-independent. It currently supports FC and iSCSI, but is also open for future enhancements.
- It is host-independent, for fixed block-based open systems environments.
- It is external storage RAID controller-independent, providing a continual and ongoing process to qualify additional types of controllers.
- It is able to utilize disks internally located within the nodes (solid-state drives).
- It is able to utilize disks locally attached to the nodes (SAS drives).

On the SAN storage that is provided by the disk subsystems, the SVC can offer the following services:
- It can create and manage a single pool of storage attached to the SAN.
- It can manage multiple tiers of storage.
- It provides block-level virtualization (logical unit virtualization).
- It provides automatic block-level (sub-LUN) data migration between storage tiers.
- It provides advanced functions to the entire SAN, such as a large scalable cache and Advanced Copy Services: FlashCopy (point-in-time copy), and Metro Mirror and Global Mirror (synchronous and asynchronous remote copy).

- It provides nondisruptive and concurrent data migration.
This list of features will grow with each future release, because the layered architecture of the SVC can easily implement new storage features.

2.2.1 SAN Volume Controller topology


SAN-based storage is managed by the SVC in one or more pairs of SVC hardware nodes, referred to as a clustered system or system. These nodes are attached to the SAN fabric, along with RAID controllers and host systems. The SAN fabric is zoned to allow the SVC to see the RAID controllers, and for the hosts to see the SVC. The hosts are not allowed to see or operate on the same physical storage (LUN) from the RAID controller that has been assigned to the SVC. Storage controllers can be shared between the SVC and direct host access as long as the same LUNs are not shared. The zoning capabilities of the SAN switch must be used to create distinct zones to ensure this rule is enforced. SAN fabrics can include standard FC, iSCSI over Ethernet, or possible future types, such as FC over Ethernet.

Figure 2-2 on page 14 shows a conceptual diagram of a storage system utilizing the SVC. It shows a number of hosts that are connected to a SAN fabric or LAN. In practical implementations that have high availability requirements (the majority of the target clients for SVC), the SAN fabric cloud represents a redundant SAN. A redundant SAN consists of a fault-tolerant arrangement of two or more counterpart SANs, thereby providing alternate paths for each SAN-attached device. Both scenarios (using a single network and using two physically separate networks) are supported for iSCSI-based and LAN-based access networks to the SVC. Redundant paths to volumes can be provided in both scenarios.

For simplicity, Figure 2-2 on page 14 shows only one SAN fabric and two zones, namely host and storage. In a real environment, it is a best practice to use two redundant SAN fabrics. The SVC can be connected to up to four fabrics. Zoning details are described in 3.3.2, SAN zoning and SAN connections on page 75.

Figure 2-2 SVC conceptual and topology overview

A clustered system of SVC nodes is connected to the same fabric and presents logical disks or volumes to the hosts. These volumes are created from managed LUNs or MDisks that are presented by the RAID disk subsystems. There are two distinct zones shown in the fabric:

- A host zone, in which the hosts can see and address the SVC nodes
- A storage zone, in which the SVC nodes can see and address the MDisks/logical unit numbers (LUNs) presented by the RAID subsystems

Hosts are not permitted to operate on the RAID LUNs directly; all data transfer happens through the SVC nodes. This design is commonly described as symmetric virtualization.

For iSCSI-based access, using two networks and separating iSCSI traffic within the networks by using a dedicated virtual local area network (VLAN) path for storage traffic prevents any IP interface, switch, or target port failure from compromising the host servers' access to the volumes.

2.3 SVC terminology


To provide a higher level of consistency between IBM storage products, the terminology used starting with SVC V6, and therefore throughout the rest of this book, has changed when compared to previous SVC releases. Table 2-1 on page 14 summarizes the main changes.
Table 2-1 SVC terminology mapping

New term: clustered system or system
Previous term: cluster
Description: A clustered system consists of between one and four I/O Groups.

New term: event
Previous term: error
Description: An occurrence of significance to a task or system. Events can include completion or failure of an operation, a user action, or the change in state of a process.

New term: host mapping
Previous term: VDisk-to-host mapping
Description: The process of controlling which hosts have access to specific volumes within a system.

New term: storage pool
Previous term: managed disk (MDisk) group
Description: A collection of storage capacity that provides the capacity requirements for a volume.

New term: thin provisioning (or thin-provisioned)
Previous term: space-efficient
Description: The ability to define a storage unit (full system, storage pool, volume) with a logical capacity size that is larger than the physical capacity assigned to that storage unit.

New term: volume
Previous term: virtual disk (VDisk)
Description: A discrete unit of storage on disk, tape, or other data recording medium that supports a form of identifier and parameter list, such as a volume label or input/output control.

For a detailed glossary containing the terms and definitions used in the SAN Volume Controller, see Appendix B, Terminology on page 891.

2.4 SAN Volume Controller components


The SVC product provides block-level aggregation and volume management for attached disk storage. In simpler terms, the SVC manages a number of back-end storage controllers or locally attached disks and maps the physical storage within those controllers or disk arrays into logical disk images, or volumes, that can be seen by application servers and workstations in the SAN. The SAN is zoned so that the application servers cannot see the back-end physical storage, which prevents any possible conflict between the SVC and the application servers both trying to manage the back-end storage. The SVC is based on the following components, which are discussed in more detail in later sections of this chapter.

2.4.1 Nodes
Each SAN Volume Controller hardware unit is called a node. The node provides the virtualization for a set of volumes, cache, and copy services functions. SVC nodes are deployed in pairs, and multiple pairs make up a clustered system or system. A system can consist of between one and four SVC node pairs.

One of the nodes within the system will be known as the configuration node. The configuration node manages the configuration activity for the system. If this node fails, the system will choose a new node to become the configuration node. Because the nodes are installed in pairs, each node provides a failover function to its partner node in the event of a node failure.
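For illustration, the configuration node can be identified from the CLI. This is a minimal sketch; the exact output columns can vary by code level:

   svcinfo lsnode -delim :

A value of yes in the config_node column marks the node that currently owns the management interface.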

2.4.2 I/O Groups


Each pair of SVC nodes is also referred to as an I/O Group. An SVC clustered system can have from one to four I/O Groups. A specific volume is always presented to a host server by a single I/O Group of the system. When a host server performs I/O to one of its volumes, all the I/Os for a specific volume are directed to one specific I/O Group in the system. Also, under normal conditions, the I/Os for that specific volume are always processed by the same node within the I/O Group. This node is referred to as the preferred node for this specific volume. Both nodes of an I/O Group act as the preferred node for their own specific subset of the total number of volumes that the I/O Group presents to the host servers. A maximum of 2048 volumes per I/O group is allowed. However, both nodes also act as failover nodes for their respective partner node within the I/O Group. Therefore, a node will take over the I/O workload from its partner node, if required. Thus, in an SVC-based environment, the I/O handling for a volume can switch between the two nodes of the I/O Group. For this reason it is mandatory for servers that are connected through FC to use multipath drivers to be able to handle these failover situations. The SVC I/O Groups are connected to the SAN so that all application servers accessing volumes from this I/O Group have access to this group. Up to 256 host server objects can be defined per I/O Group. The host server objects can access volumes that are provided by this specific I/O Group. If required, host servers can be mapped to more than one I/O Group within the SVC system; therefore, they can access volumes from separate I/O Groups. You can move volumes between I/O Groups to redistribute the load between the I/O Groups; however, moving volumes between I/O Groups cannot be done concurrently with host I/O and will require a brief interruption to remap the host.
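As a sketch only (the pool, I/O Group, and volume names here are invented for illustration), a volume can be bound to a specific I/O Group when it is created, and its preferred node can be verified afterward:

   svctask mkvdisk -mdiskgrp Pool_DS4K -iogrp io_grp0 -size 50 -unit gb -name vol_app1
   svcinfo lsvdisk vol_app1

In the detailed lsvdisk view, the preferred_node_id field shows which node of the I/O Group normally services I/O for this volume.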

2.4.3 System
The system or clustered system consists of between one and four I/O Groups. Certain configuration limitations are then set for the individual system. For example, the maximum number of volumes supported per system is 8192 (having a maximum of 2048 volumes per I/O Group), or the maximum managed disk supported is 32 PB per system. All configuration, monitoring, and service tasks are performed at the system level. Configuration settings are replicated to all nodes in the system. To facilitate these tasks, a management IP address is set for the system. A process is provided to back up the system configuration data onto disk so that it can be restored in the event of a disaster. Note that this method does not back up application data. Only SVC system configuration information is backed up. For the purposes of remote data mirroring, two or more systems must form a partnership prior to creating relationships between mirrored volumes.
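For example, the configuration backup described above can be triggered from the CLI; this is a minimal sketch, and the generated file names can vary by code level:

   svcconfig backup

The resulting backup files (for example, svc.config.backup.xml) reside on the configuration node and can be copied off with secure copy (scp) for safekeeping.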

For details about the maximum configurations applicable to the system, I/O Groups, and nodes, select the restrictions hot link in the section corresponding to your SVC code level:

http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html

2.4.4 Split cluster


Normally a pair of nodes from the same I/O group are physically located within the same rack, in the same computer room. Starting with SVC 6.3, to provide protection against failures that affect an entire location (for example, a power failure), you can split a single system between two physical locations. Appendix C, SAN Volume Controller Split I/O Group Overview, Diagnostics, and Recovery Guidelines on page 899 has more information.

2.4.5 MDisks
The SVC system and its I/O Groups view the storage that is presented to the SAN by the back-end controllers as a number of disks or LUNs, known as managed disks or MDisks. Because the SVC does not attempt to provide recovery from physical disk failures within the back-end controllers, an MDisk is usually provisioned from a RAID array. The application servers, however, do not see the MDisks at all. Instead, they see a number of logical disks, known as virtual disks or volumes, which are presented by the SVC I/O Groups through the SAN (FC) or LAN (iSCSI) to the servers.

The MDisks are placed into storage pools, where they are divided into a number of extents, which can range in size from 16 MB to 8192 MB, as defined by the SVC administrator. A volume is host-accessible storage that has been provisioned out of one storage pool, or, if it is a mirrored volume, out of two storage pools. The maximum size of an MDisk is 1 PB, and an SVC system supports up to 4096 MDisks (including internal RAID arrays).

At any point in time, an MDisk is in one of the following three modes:

Unmanaged MDisk
An MDisk is reported as unmanaged when it is not a member of any storage pool. An unmanaged MDisk is not associated with any volumes and has no metadata stored on it. The SVC will not write to an MDisk that is in unmanaged mode, except when it attempts to change the mode of the MDisk to one of the other modes. The SVC can see the resource, but it is not assigned to a storage pool.

Managed MDisk
Managed mode MDisks are always members of a storage pool, and they contribute extents to the storage pool. Volumes (if not operated in image mode) are created from these extents. MDisks operating in managed mode might have metadata extents allocated from them and can be used as quorum disks. This is the most common and normal mode of an MDisk.

Image mode MDisk
Image mode provides a direct block-for-block translation from the MDisk to the volume by using virtualization. This mode is provided to satisfy three major usage scenarios:

- Image mode allows virtualization of MDisks that already contain data that was written directly, not through an SVC; rather, it was created by a directly connected host. This mode allows a client to insert the SVC into the data path of an existing storage volume or LUN with minimal downtime. Chapter 6, Data migration on page 227, provides details of the data migration process.
- Image mode allows a volume that is managed by the SVC to be used with the native copy services function provided by the underlying RAID controller. To avoid the loss of data integrity when the SVC is used in this way, it is important that you disable the SVC cache for the volume.
- SVC provides the ability to migrate to image mode, which allows the SVC to export volumes and access them directly from a host without the SVC in the path.

Each MDisk presented from an external disk controller has an online path count, which is the number of nodes that have access to that MDisk. The maximum count is the maximum number of paths detected at any point in time by the system; the current count is what the system sees at this point in time. A current value that is less than the maximum can indicate that SAN fabric paths have been lost. See 2.5.1, Image mode volumes on page 23 for more details.

Starting with SVC 6.1, internal SSD drives do not appear as MDisks. Internal SSDs are used and appear as disk drives, and therefore additional RAID protection is required.
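The mode of each MDisk can be verified from the CLI; a minimal sketch follows (output formatting abbreviated):

   svcinfo lsmdisk -delim :

The mode column of the output reports unmanaged, managed, or image for each MDisk.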

2.4.6 Quorum disk


A quorum disk is a managed disk (MDisk) that contains a reserved area for use exclusively by the system. The system uses quorum disks to break a tie when exactly half the nodes in the system remain after a SAN failure; this is referred to as split brain. There are three candidate quorum disks. However, only one quorum disk is active at any time. Quorum disks are discussed in more detail in 2.8.1, Quorum disks on page 39.

2.4.7 Disk tier


It is likely that the MDisks (LUNs) presented to the SVC system will have various performance attributes due to the type of disk or RAID array on which they reside. The MDisks can be on 15K RPM Fibre Channel or SAS disks, Nearline SAS or SATA disks, or even solid-state drives (SSDs). Therefore, a storage tier attribute is assigned to each MDisk, with the default being generic_hdd. Starting with SVC V6.1, a new tier 0 (zero) disk attribute is available for SSDs; it is known as generic_ssd.
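As a sketch (the MDisk name is hypothetical, and the availability of the parameter depends on the code level), the tier attribute of an MDisk that is known to be SSD-backed can be changed from the default:

   svctask chmdisk -tier generic_ssd mdisk7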

2.4.8 Storage pool


A storage pool is a collection of up to 128 MDisks that provides the pool of storage from which volumes are provisioned. A single system can manage up to 128 storage pools. The size of these pools can be changed (expanded or shrunk) at run time by adding or removing MDisks, without taking the storage pool or the volumes offline. At any point in time, an MDisk can be a member of only one storage pool, with the exception of image mode volumes; see 2.5.1, Image mode volumes on page 23 for more information about this topic.

Figure 2-3 on page 19 illustrates the relationships of the SVC entities to each other.

Figure 2-3 Overview of SVC clustered system with I/O Group

Each MDisk in the storage pool is divided into a number of extents. The size of the extent will be selected by the administrator at the creation time of the storage pool and cannot be changed later. The size of the extent ranges from 16 MB up to 8192 MB.

It is a best practice to use the same extent size for all storage pools in a system; this is a prerequisite for supporting volume migration between two storage pools. If the storage pool extent sizes are not the same, you must use volume mirroring (see 2.5.4, Mirrored volumes on page 26) to copy volumes between pools.

SVC limits the number of extents in a system to 2^22 (about 4 million). Because the number of addressable extents is limited, the total capacity of an SVC system depends on the extent size that is chosen by the SVC administrator. The capacity numbers that are specified in Table 2-2 for an SVC system assume that all defined storage pools have been created with the same extent size.
Table 2-2 Extent size-to-addressability matrix

   Extent size    System capacity
   16 MB          64 TB
   32 MB          128 TB
   64 MB          256 TB
   128 MB         512 TB
   256 MB         1 PB
   512 MB         2 PB
   1024 MB        4 PB
   2048 MB        8 PB
   4096 MB        16 PB
   8192 MB        32 PB

For most systems, a capacity of 1 to 2 PB is sufficient. A best practice is to use 256 MB or, for larger clustered systems, 512 MB as the standard extent size.
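For example (the pool and MDisk names here are invented), a storage pool with a 256 MB extent size might be created as follows:

   svctask mkmdiskgrp -name Pool_256 -ext 256 -mdisk mdisk0:mdisk1:mdisk2

Because the extent size cannot be changed after the pool is created, choose it before provisioning volumes.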

Single-tiered storage pool


MDisks used in a single-tiered storage pool should have the following characteristics to avoid inducing performance problems and other issues:

- They have the same hardware characteristics, for example, the same RAID type, RAID array size, disk type, and disk revolutions per minute (RPMs).
- The disk subsystems providing the MDisks must have similar characteristics, for example, maximum input/output operations per second (IOPS), response time, cache, and throughput.
- The MDisks used are of the same size and are therefore MDisks that provide the same number of extents. If that is not feasible, you will need to check the distribution of the volumes' extents in that storage pool.

For further details, see SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521, which is available at this website:

http://www.redbooks.ibm.com/abstracts/sg247521.html?Open

Multitiered storage pool


A multitiered storage pool has a mix of MDisks with more than one type of disk tier attribute, for example, a storage pool containing a mix of generic_hdd and generic_ssd MDisks. A multitiered storage pool therefore contains MDisks with various characteristics, as opposed to a single-tiered storage pool. However, it is a best practice for each tier to have MDisks of the same size and MDisks that provide the same number of extents. Multitiered storage pools are used to enable the automatic migration of extents between disk tiers using the SVC Easy Tier function. These storage pools are described in more detail in Chapter 7, Easy Tier on page 349.

2.4.9 Volumes
Volumes are logical disks presented to the host or application servers by the SVC. The hosts cannot see the MDisks; they can only see the logical volumes created from combining extents from a storage pool.
There are three types of volumes: striped, sequential, and image. These types are determined by the way in which the extents are allocated from the storage pool:

- A volume created in striped mode has extents allocated from each MDisk in the storage pool in a round-robin fashion.
- With a sequential mode volume, extents are allocated sequentially from an MDisk.
- Image mode is a one-to-one mapped extent mode volume.

Using striped mode is the best method to use for most cases. However, sequential extent allocation mode can slightly increase the sequential performance for certain workloads. Figure 2-4 on page 21 shows striped volume mode and sequential volume mode, and illustrates how the extent allocation from the storage pool differs.
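As an illustration (pool, MDisk, and volume names are placeholders), the -vtype parameter of mkvdisk selects the allocation policy at creation time:

   svctask mkvdisk -mdiskgrp Pool_256 -iogrp 0 -size 100 -unit gb -vtype striped -name vol_striped
   svctask mkvdisk -mdiskgrp Pool_256 -iogrp 0 -size 100 -unit gb -vtype seq -mdisk mdisk3 -name vol_seq

Note that a sequential volume requires you to name the MDisk from which its extents are allocated.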

Figure 2-4 Storage Pool extents overview

You can allocate the extents for a volume in many ways. The process is under full user control at volume creation time and can be changed at any time by migrating single extents of a volume to another MDisk within the storage pool. Chapter 6, Data migration on page 227, Chapter 9, SAN Volume Controller operations using the command-line interface on page 467, and Chapter 10, SAN Volume Controller operations using the GUI on page 631 provide detailed explanations about how to create volumes and migrate extents by using the GUI or CLI.

2.4.10 Easy Tier performance function


Easy Tier is a performance function that automatically migrates, or moves, extents of a volume from one MDisk storage tier to another. Easy Tier monitors the host I/O activity and latency on the extents of all volumes with the Easy Tier function turned on in a multitiered storage pool over a 24-hour period. Next, it creates an extent migration plan based on this activity and then dynamically moves high-activity, or hot, extents to a higher disk tier within the storage pool. It also moves extents whose activity has dropped off, or cooled, from the higher-tier MDisks back to a lower-tier MDisk.

Note: The Easy Tier function can be turned on or off at the storage pool level and at the volume level.

To experience the potential benefits of using Easy Tier in your environment before actually installing expensive solid-state drives (SSDs), you can turn on the Easy Tier function for a single-tier storage pool. Next, turn on the Easy Tier function for the volumes within that pool. Easy Tier then starts monitoring activity on the volume extents in the pool.

Easy Tier creates a report every 24 hours, providing information about how it would behave if the pool were a multitiered storage pool. So, even though Easy Tier extent migration is not possible within a single-tier pool, the Easy Tier statistical measurement function is available. The Easy Tier function can make it more appropriate to use smaller storage pool extent sizes.

The usage statistics file can be offloaded from the SVC nodes. You can then use the IBM Storage Advisor Tool to create a summary report. For more detailed information about Easy Tier functionality and about statistics generation using the IBM Storage Advisor Tool, see Chapter 7, Easy Tier on page 349.
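A minimal sketch of enabling the function (the pool and volume names are invented, and the accepted parameter values can vary by code level):

   svctask chmdiskgrp -easytier on Pool_Multi
   svctask chvdisk -easytier on vol_app1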

2.4.11 Hosts
Volumes can be mapped to a host to allow access for a specific server to a set of volumes. A host within the SVC is a collection of HBA worldwide port names (WWPNs) or iSCSI qualified names (IQNs) that are defined on the specific server. Note that iSCSI names are internally identified by fake WWPNs, that is, WWPNs that are generated by the SVC. Volumes can be mapped to multiple hosts, for example, a volume that is accessed by multiple hosts of a server system.

iSCSI is an alternative means of attaching hosts. However, all communication with back-end storage subsystems, and with other SVC systems, is still through FC. Node failover can be handled without having a multipath driver installed on the iSCSI server: an iSCSI-attached server can simply reconnect after a node failover to the original target IP address, which is then presented by the partner node. However, to protect the server against link failures in the network or host bus adapter (HBA) failures, the use of a multipath driver is mandatory.

Volumes are LUN masked to the host's HBA WWPNs by a process called host mapping. Mapping a volume to the host makes it accessible to the WWPNs or iSCSI names (IQNs) that are configured on the host object. For a SCSI over Ethernet connection, the IQN identifies the iSCSI target (destination) adapter. Host objects can have both IQNs and WWPNs.
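For example (the WWPN, host name, and volume name below are placeholders), an FC host object is defined by its WWPNs and then given access to a volume through host mapping:

   svctask mkhost -name app_host1 -fcwwpn 210000E08B05ADFC
   svctask mkvdiskhostmap -host app_host1 vol_app1

An iSCSI host is defined in the same way, except that -iscsiname with the host IQN replaces -fcwwpn.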

2.4.12 Maximum supported configurations


For details about the maximum configurations applicable to the system, I/O Group, and nodes, select the restrictions hot link in the section of the SVC support site that corresponds to your SVC code level:

http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html

There are some configuration limits in the SVC. The following list includes several of the more important ones; for the most current details, consult the SVC support site:

- 16 WWNNs per storage subsystem
- 1 PB MDisks
- 8192 MB extents
- Long object names, up to 63 characters

See 2.13, What is new with SVC 6.3 on page 63 for a more detailed explanation of the new features.

2.5 Volume overview


The maximum size of a single volume is 256 TB. A single SVC system supports up to 8192 volumes. Volumes have the following characteristics or attributes:

- Volumes can be created and deleted.
- Volumes can be resized (expanded or shrunk).
- Volume extents can be migrated at run time to another MDisk or storage pool.
- Volumes can be created as fully allocated or thin-provisioned. A conversion from a fully allocated to a thin-provisioned volume, and vice versa, can be done at run time.
- Volumes can be stored in multiple storage pools (mirrored) to make them resistant to disk subsystem failures or to improve the read performance.
- Volumes can be mirrored synchronously or asynchronously for longer distances. An SVC system can run active volume mirrors to a maximum of three other SVC systems, but not from the same volume.
- Volumes can be copied using FlashCopy. Multiple snapshots and quick restore from snapshots (reverse FlashCopy) are supported.

Volumes have two major modes: image mode and managed mode. Managed mode volumes have two policies: the sequential policy and the striped policy. Policies define how the extents of a volume are allocated from a storage pool.

2.5.1 Image mode volumes


Image mode volumes are used to migrate LUNs that were previously mapped directly to host servers over to the control of the SVC. Image mode provides a one-to-one mapping between the logical block addresses (LBAs) of a volume and those of its MDisk. Image mode volumes have a minimum size of one block (512 bytes) and always occupy at least one extent.

An image mode MDisk is mapped to one, and only one, image mode volume. The volume capacity that is specified must be equal to the size of the image mode MDisk. When you create an image mode volume, the specified MDisk must be in unmanaged mode and must not be a member of a storage pool. The MDisk is made a member of the specified storage pool (for example, Storage Pool_IMG_xxx) as a result of the creation of the image mode volume.

The SVC also supports the reverse process, in which a managed mode volume can be migrated to an image mode volume. If a volume is migrated to another MDisk, it is represented as being in managed mode during the migration, and it is only represented as an image mode volume after it has reached the state where it is a straight-through mapping.

An image mode MDisk is associated with exactly one volume. If the (image mode) MDisk is not a multiple of the storage pool's extent size, the last extent is partial (not filled). An image mode volume is a pass-through one-to-one map of its MDisk. It cannot be a quorum disk and does not have any SVC metadata extents assigned to it.

Managed or image mode MDisks are always members of a storage pool. It is a best practice to put image mode MDisks in a dedicated storage pool and use a special name for it (for example, Storage Pool_IMG_xxx). Remember that the extent size chosen for this specific storage pool must be the same as the extent size of the storage pool into which you plan to migrate the data. All of the SVC copy services functions can be applied to image mode disks.

Figure 2-5 Image mode volume versus striped volume
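A minimal sketch of bringing an existing LUN under SVC control as an image mode volume (the unmanaged MDisk and dedicated pool names are hypothetical):

   svctask mkvdisk -mdiskgrp Pool_IMG_DS4K -iogrp 0 -vtype image -mdisk mdisk9 -name vol_legacy

No size is given here; with image mode, the volume capacity is taken from the size of the underlying MDisk.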

2.5.2 Managed mode volumes


Volumes operating in managed mode provide a full set of virtualization functions. Within a storage pool, SVC supports an arbitrary relationship between extents on (managed mode) volumes and extents on MDisks. Each volume extent maps to exactly one MDisk extent. Figure 2-6 on page 25 represents this diagrammatically. It shows a volume that is made up of a number of extents shown as V0 to V7. Each of these extents is mapped to an extent on one of the MDisks: A, B, or C. The mapping table stores the details of this indirection. Notice that several of the MDisk extents are unused. There is no volume extent that maps to them. These unused extents are available for use in creating new volumes, migration, expansion, and so on.

Figure 2-6 Simple view of block virtualization

The allocation of a specific number of extents from a specific set of MDisks is performed by the following algorithm: if the set of MDisks from which to allocate extents contains more than one MDisk, extents are allocated from MDisks in a round-robin fashion. If an MDisk has no free extents when its turn arrives, its turn is missed and the round-robin moves to the next MDisk in the set that has a free extent. When creating a new volume, the first MDisk from which to allocate an extent is chosen in a pseudo-random way rather than simply choosing the next disk in a round-robin fashion. The pseudo-random algorithm avoids the situation whereby the striping effect inherent in a round-robin algorithm places the first extent for a large number of volumes on the same MDisk. Placing the first extent of a number of volumes on the same MDisk can lead to poor performance for workloads that place a large I/O load on the first extent of each volume, or that create multiple sequential streams.

2.5.3 Cache mode and cache-disabled volumes


Under normal conditions, a volume's read and write data is held in the cache of its preferred node, with a mirrored copy of the write data held in the partner node of the same I/O Group. However, it is possible to create a volume with cache disabled, which means that I/Os are passed directly through to the back-end storage controller rather than being held in the node's cache.

Having cache-disabled volumes makes it possible to use the native copy services in the underlying RAID array controller for MDisks (LUNs) that are used as SVC image mode volumes. Using SVC copy services rather than the underlying disk controller copy services gives better results.
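As a sketch (names invented), the cache mode is selected when the volume is created; here, an image mode volume is created with caching disabled so that the back-end controller's copy services see all host I/O:

   svctask mkvdisk -mdiskgrp Pool_IMG_DS4K -iogrp 0 -vtype image -mdisk mdisk10 -cache none -name vol_ctrl_cs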

2.5.4 Mirrored volumes


The mirrored volume feature provides a simple RAID 1 function; thus, a volume has two physical copies of its data. This allows the volume to remain online and accessible even if one of the MDisks sustains a failure that causes it to become inaccessible. The two copies of the volume are typically allocated from separate storage pools or by using image-mode copies. The volume can participate in FlashCopy and Remote Copy relationships; it is serviced by an I/O Group; and it has a preferred node.

Each copy is not a separate object and cannot be created or manipulated except in the context of the volume. Copies are identified through the configuration interface with a copy ID of their parent volume. This copy ID can be either 0 or 1. The feature provides a point-in-time copy function that is achieved by splitting a copy from the volume. Note, however, that the mirrored volume feature does not address other forms of mirroring based on Remote Copy (sometimes called HyperSwap), which mirrors volumes across I/O Groups or clustered systems. It is also not intended to manage mirroring or remote copy functions in back-end controllers. Figure 2-7 provides an overview of volume mirroring.

Figure 2-7 Volume mirroring overview

A second copy can be added to a volume with a single copy, or removed from a volume with two copies. Checks prevent the accidental removal of the only remaining copy of a volume. A newly created, unformatted volume with two copies will initially have the two copies in an out-of-synchronization state. The primary copy will be defined as fresh and the secondary copy as stale. The synchronization process will update the secondary copy until it is fully synchronized. This is done at the default synchronization rate or at a rate defined when creating the volume or modifying it. The synchronization status for mirrored volumes is recorded on the quorum disk.

If a two-copy mirrored volume is created with the format parameter, both copies are formatted in parallel, and the volume comes online when both operations are complete and the copies are in sync. If mirrored volumes are expanded or shrunk, all of their copies are also expanded or shrunk.

If it is known that the MDisk space that will be used for creating copies is already formatted, or if the user does not require read stability, a no synchronization option can be selected that declares the copies as synchronized (even when they are not).

To minimize the time required to resynchronize a copy that has become out of sync, only the 256 KB grains that have been written to since synchronization was lost are copied. This approach is known as an incremental synchronization; only the changed grains need to be copied to restore synchronization.

Important: An unmirrored volume can be migrated from one location to another by simply adding a second copy to the desired destination, waiting for the two copies to synchronize, and then removing the original copy 0. This operation can be stopped at any time. The two copies can be in separate storage pools with separate extent sizes.

Where there are two copies of a volume, one copy is known as the primary copy. If the primary copy is available and synchronized, reads from the volume are directed to it. The user can select the primary copy when creating the volume, or can change it later. Placing the primary copy on a high-performance controller maximizes the read performance of the volume. The write performance is constrained if one copy is on a lower-performance controller, because writes must complete to both copies before the volume can provide acknowledgment to the host that the write completed successfully. Remember that writes to both copies must complete to be considered successfully written, even if volume mirroring has one copy in a solid-state drive storage pool and the second copy in a storage pool containing resources from a disk subsystem.

A volume with copies can be checked to see whether all of the copies are identical or consistent. If a medium error is encountered while reading from one copy, it is repaired by using data from the other copy. This consistency check is performed asynchronously with host I/O.

Important: Mirrored volumes can be taken offline if there is no quorum disk available. This behavior occurs because the synchronization status for mirrored volumes is recorded on the quorum disk.

Mirrored volumes consume bitmap space at a rate of 1 bit per 256 KB grain, which translates to 1 MB of bitmap space supporting 2 TB of mirrored volumes. The default allocation of bitmap space is 20 MB, which supports 40 TB of mirrored volumes. If all 512 MB of variable bitmap space is allocated to mirrored volumes, 1 PB of mirrored volumes can be supported.
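The migration technique described in the Important box might look as follows on the CLI; this is a sketch only, with invented pool and volume names:

   svctask addvdiskcopy -mdiskgrp Pool_NEW vol_app1
   svcinfo lsvdisksyncprogress vol_app1
   svctask rmvdiskcopy -copy 0 vol_app1

The rmvdiskcopy command is issued only after lsvdisksyncprogress reports that the new copy is fully synchronized.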

2.5.5 Thin-provisioned volumes


Volumes can be configured to be either thin-provisioned or fully allocated. With respect to application reads and writes, a thin-provisioned volume behaves as though it were fully allocated. When creating a thin-provisioned volume, the user specifies two capacities: the real physical capacity allocated to the volume from the storage pool, and its virtual capacity available to the host. In a fully allocated volume, these two values are the same.

Thus, the real capacity determines the quantity of MDisk extents that is initially allocated to the volume. The virtual capacity is the capacity of the volume reported to all other SVC components (for example, FlashCopy, cache, and Remote Copy) and to the host servers. The real capacity is used to store both the user data and the metadata for the thin-provisioned volume. The real capacity can be specified as an absolute value or as a percentage of the virtual capacity.

Thin-provisioned volumes can be used as volumes assigned to the host, by FlashCopy to implement thin-provisioned FlashCopy targets, and also with the mirrored volumes feature.

When a thin-provisioned volume is initially created, a small amount of the real capacity is used for initial metadata. Write I/Os to grains of the thin volume that have not previously been written to cause grains of the real capacity to be used to store metadata and the actual user data. Write I/Os to grains that have previously been written to update the grain where data was previously written. The grain size is defined when the volume is created and can be 32 KB, 64 KB, 128 KB, or 256 KB. Figure 2-8 illustrates the thin-provisioning concept.

Figure 2-8 Conceptual diagram of thin-provisioned volume

Thin-provisioned volumes store both user data and metadata. Each grain of data requires metadata to be stored, which means that the I/O rates obtained from thin-provisioned volumes are lower than those of fully allocated volumes. The metadata storage overhead is never greater than 0.1% of the user data, and the overhead is independent of the virtual capacity of the volume. If you are using thin-provisioned volumes in a FlashCopy map, then for best performance use the same grain size as the map grain size. If you are using the thin-provisioned volume directly with a host system, then use a small grain size.

Thin-provisioned volume format: Thin-provisioned volumes do not need formatting. A read I/O that requests data from unallocated data space returns zeroes. When a write I/O causes space to be allocated, the grain is zeroed prior to use. However, if the node is a CF8, space is not allocated for a host write that contains all zeros. The formatting flag is ignored when a thin volume is created or when the real capacity is expanded; the virtualization component never formats the real capacity of a thin-provisioned volume.

The real capacity of a thin volume can be changed if the volume is not in image mode. Increasing the real capacity allows a larger amount of data and metadata to be stored on the volume. Thin-provisioned volumes use the real capacity provided in ascending order as new data is written to the volume. If the user initially assigns too much real capacity to the volume, the real capacity can be reduced to free storage for other uses.

A thin-provisioned volume can be configured to autoexpand. This feature causes the SVC to automatically add a fixed amount of additional real capacity to the thin volume as required. Autoexpand therefore attempts to maintain a fixed amount of unused real capacity for the volume. This amount is known as the contingency capacity. The contingency capacity is initially set to the real capacity that is assigned when the volume is created. If the user modifies the real capacity, the contingency capacity is reset to be the difference between the used capacity and the real capacity.

A volume that is created without the autoexpand feature, and thus has a zero contingency capacity, goes offline as soon as the real capacity is used and needs to expand. Autoexpand does not cause the real capacity to grow much beyond the virtual capacity. The real capacity can be manually expanded to more than the maximum that is required by the current virtual capacity, and the contingency capacity is recalculated.

To support the autoexpansion of thin-provisioned volumes, the storage pools from which they are allocated have a configurable capacity warning. When the used capacity of the pool exceeds the warning capacity, a warning event is logged. For example, if a warning of 80% has been specified, the event is logged when 20% of the free capacity remains.

A thin-provisioned volume can be converted nondisruptively to a fully allocated volume, or vice versa, by using the volume mirroring function. For example, you can add a thin-provisioned copy to a fully allocated primary volume and then remove the fully allocated copy from the volume after they are synchronized. The fully allocated to thin-provisioned migration procedure uses a zero-detection algorithm, so that grains containing all zeros do not cause any real capacity to be used.
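For example (all names are placeholders), a thin-provisioned, autoexpanding volume with 10 GB of virtual capacity, 20% initial real capacity, and a capacity warning at 80% might be created as follows:

   svctask mkvdisk -mdiskgrp Pool_256 -iogrp 0 -size 10 -unit gb -rsize 20% -autoexpand -grainsize 256 -warning 80% -name vol_thin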

2.5.6 Volume I/O governing


It is possible to constrain I/O operations so that the maximum amount of I/O activity that a host can perform on a volume is limited over a specific period of time. This governing feature can be used to satisfy a quality of service requirement or a contractual obligation (for example, if a client agrees to pay for I/Os performed, but will not pay for I/Os beyond a certain rate). Only Read, Write, and Verify commands that access the physical medium are subject to I/O governing.

The governing rate can be set in I/Os per second or in MB per second. It can be altered by changing the throttle value through the svctask chvdisk command and specifying the -rate parameter.

I/O governing: I/O governing on Metro Mirror or Global Mirror secondary volumes does not affect the data copy rate from the primary. Governing has no effect on FlashCopy or data migration I/O rates.

An I/O budget is expressed as a number of I/Os, or MBs, over a minute. The budget is evenly divided between all SVC nodes that service that volume, that is, between the nodes that form the I/O Group of which that volume is a member.

The algorithm operates two levels of policing. While a volume on each SVC node receives I/O at a rate lower than the governed level, no governing is performed. However, when the I/O rate exceeds the defined threshold, adjustments to the policy are made. A check is made every minute to verify that each node is continuing to receive I/O below the threshold level. Whenever this check shows that the host has exceeded its limit on one or more nodes, policing begins for new I/Os. The following conditions exist while policing is in force:

- A budget allowance is calculated for a one-second period.
- I/Os are counted over a period of a second.
- If I/Os are received in excess of the one-second budget on any node in the I/O Group, those I/Os and later I/Os are pended.
- When the second expires, a new budget is established, and any pended I/Os are redriven under the new budget.

This algorithm might cause I/O to back up in the front end, which might eventually cause a Queue Full condition to be reported to hosts that continue to flood the system with I/O. If a host stays within its one-second budget on all nodes in the I/O Group for a period of one minute, the policing is relaxed, and monitoring takes place over the one-minute period as before.
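As a sketch (the volume name is invented), the throttle can be set in I/Os per second or, with the -unitmb parameter, in MB per second:

   svctask chvdisk -rate 2000 vol_app1
   svctask chvdisk -rate 200 -unitmb vol_app1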

2.6 iSCSI overview


iSCSI is an alternative means of attaching hosts to the SVC. All communications with back-end storage subsystems and with other SVC systems only occur through FC. The iSCSI function is a software function that is provided by the SVC code, not hardware. In the simplest terms, iSCSI allows the transport of SCSI commands and data over a TCP/IP network, based on IP routers and Ethernet switches. iSCSI is a block-level protocol that encapsulates SCSI commands into TCP/IP packets and thereby leverages an existing IP network, instead of requiring expensive FC HBAs and a SAN fabric infrastructure. A pure SCSI architecture is based on the client/server model. A client (for example, server or workstation) initiates read or write requests for data from a target server (for example, a data storage system). Commands, which are sent by the client and processed by the server, are put into the Command Descriptor Block (CDB). The server executes a command, and completion is indicated by a special signal alert.

The major functions of iSCSI include encapsulation and the reliable delivery of CDB transactions between initiators and targets through the TCP/IP network, especially over a potentially unreliable IP network. The concepts of names and addresses have been carefully separated in iSCSI:

- An iSCSI name is a location-independent, permanent identifier for an iSCSI node. An iSCSI node has one iSCSI name, which stays constant for the life of the node. The terms initiator name and target name also refer to an iSCSI name.
- An iSCSI address specifies not only the iSCSI name of an iSCSI node, but also a location of that node. The address consists of a host name or IP address, a TCP port number (for the target), and the iSCSI name of the node. An iSCSI node can have any number of addresses, which can change at any time, particularly if they are assigned by way of Dynamic Host Configuration Protocol (DHCP). An SVC node represents an iSCSI node and provides statically allocated IP addresses.

Each iSCSI node, that is, an initiator or target, has a unique iSCSI qualified name (IQN), which can have a size of up to 255 bytes. The IQN is formed according to the rules adopted for Internet nodes. The iSCSI qualified name format is defined in RFC3720 and contains (in order) these elements:

- The string iqn.
- A date code specifying the year and month in which the organization registered the domain or subdomain name used as the naming authority string.
- The organizational naming authority string, which consists of a valid, reversed domain or subdomain name.
- Optionally, a colon (:), followed by a string of the assigning organization's choosing, which must make each assigned iSCSI name unique.

For SVC, the IQN for its iSCSI target is specified as:

iqn.1986-03.com.ibm:2145.<clustername>.<nodename>

On a Windows server, the IQN, that is, the name for the iSCSI initiator, can be defined as:

iqn.1991-05.com.microsoft:<computer name>

The IQNs can be abbreviated by using a descriptive name, known as an alias. An alias can be assigned to an initiator or a target. The alias is independent of the name and does not have to be unique. Because it is not unique, the alias must be used in a purely informational way; it cannot be used to specify a target at login or used during authentication. Both targets and initiators can have aliases.

An iSCSI name provides the correct identification of an iSCSI device irrespective of its physical location. Remember, the IQN is an identifier, not an address.

Be careful: Before changing system or node names for an SVC system that has servers connected to it by way of iSCSI, be aware that because the system and node names are part of the SVC's IQN, you can lose access to your data by changing these names. The SVC GUI displays a specific warning, but the CLI does not.

The iSCSI session, which consists of a login phase and a full feature phase, is completed with a special command.

The login phase of iSCSI is identical to the FC port login process (PLOGI). It is used to adjust various parameters between two network entities and to confirm the access rights of an initiator. If the iSCSI login phase is completed successfully, the target confirms the login for the initiator; otherwise, the login is not confirmed and the TCP connection breaks.

As soon as the login is confirmed, the iSCSI session enters the full feature phase. If more than one TCP connection was established, iSCSI requires that each command and response pair must go through one TCP connection. Thus, each separate read or write command is carried out without the necessity to trace each request across separate flows. However, separate transactions can be delivered through separate TCP connections within one session.

Figure 2-9 illustrates an overview of the various block-level storage protocols and shows where the iSCSI layer is positioned.

Figure 2-9 Overview of block-level protocol stacks

2.6.1 Use of IP addresses and Ethernet ports


The SVC node hardware has two Ethernet ports. The configuration details of the two Ethernet ports can be displayed through the GUI, the CLI, or the panel on the front of the node. There are two kinds of IP addresses:

 System management IP address: This address is used for access to the SVC CLI, the SVC GUI, and the Common Information Model Object Manager (CIMOM) that runs on the SVC configuration node. Only one node, the configuration node, presents a system management IP address at any one time. There can be two system management IP addresses, one for each of the two Ethernet ports. Configuration node failover is also supported.

 Port IP address: This address is used to perform iSCSI I/O to the system. Each node can have a port IP address for each of its ports.
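As an illustrative, hedged sketch, a port IP address for iSCSI I/O might be assigned to Ethernet port 1 of each node with the svctask cfgportip command; the node IDs and addresses below are placeholders:

   svctask cfgportip -node 1 -ip 10.10.10.11 -mask 255.255.255.0 -gw 10.10.10.1 1
   svctask cfgportip -node 2 -ip 10.10.10.12 -mask 255.255.255.0 -gw 10.10.10.1 1

The trailing parameter is the Ethernet port ID; the resulting configuration can be listed with svcinfo lsportip.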


Figure 2-10 shows an overview of the IP addresses on an SVC node port and illustrates how these IP addresses are moved between the nodes of an I/O Group. The management IP addresses and the iSCSI target IP addresses will fail over to the partner node N2 if node N1 fails (and vice versa). The iSCSI target IPs will fail back to their corresponding ports on node N1 when node N1 is running again.

Figure 2-10 SVC IP address overview

It is a best practice to keep all of the eth0 ports on all of the nodes in the system on the same subnet. The same applies to the eth1 ports; however, the eth1 subnet can be separate from the eth0 subnet. An SVC system supports a maximum of 256 iSCSI sessions per SAN Volume Controller iSCSI target. You can find detailed examples of the SVC port configuration in Chapter 9, SAN Volume Controller operations using the command-line interface on page 467 and in Chapter 10, SAN Volume Controller operations using the GUI on page 631.

2.6.2 iSCSI volume discovery


The iSCSI target implementation on the SVC nodes uses the hardware offload features that are provided by the node's hardware. This implementation results in minimal impact on the node's CPU load for handling iSCSI traffic, and it simultaneously delivers excellent throughput (up to 95 MBps user data) on each of the two LAN ports. The use of jumbo frames (maximum transmission unit (MTU) sizes greater than 1,500 bytes) is a best practice.

Hosts can discover volumes through one of the following mechanisms:

 Internet Storage Name Service (iSNS): SVC can register itself with an iSNS name server; you set the IP address of this server by using the svctask chcluster command. A host can then query the iSNS server for available iSCSI targets.

 Service Location Protocol (SLP): The SVC node runs an SLP daemon, which responds to host requests. This daemon reports the available services on the node, such as the CIMOM service that runs on the configuration node; the iSCSI I/O service can now also be reported.

 iSCSI Send Target request: The host can also send a Send Target request using the iSCSI protocol to the iSCSI TCP/IP port (port 3260).
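As a hedged example of the first and third mechanisms, the iSNS server address might be registered on the system, and a Linux host running the open-iscsi initiator could then issue a Send Target discovery; the IP addresses are placeholders, and the -isnsip parameter name is an assumption about the chcluster syntax:

   svctask chcluster -isnsip 10.10.10.50

On the host:

   iscsiadm -m discovery -t sendtargets -p 10.10.10.11:3260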

2.6.3 iSCSI authentication


Authentication of the host server from the SVC system is optional and is disabled by default. The user can choose to enable Challenge Handshake Authentication Protocol (CHAP) authentication, which involves sharing a CHAP secret between the SVC system and the host. The SVC, as authenticator, sends a challenge message to the specific server (peer). The server responds with a value that is checked by the SVC. If there is a match, the SVC acknowledges the authentication. If not, the SVC terminates the connection and does not allow any I/O to volumes.

A CHAP secret can be assigned to each SVC host object. The host must then use CHAP authentication to begin a communications session with a node in the system. A CHAP secret can also be assigned to the system.

Volumes are mapped to hosts and LUN masking is applied using the same methods that are used for FC LUNs.

Because iSCSI can be used in networks where data security is a concern, the specification allows for separate security methods. You can set up security, for example, through a method such as IPSec, which is transparent to higher levels such as iSCSI because it is implemented at the IP level. Details regarding securing iSCSI can be found in RFC3723, Securing Block Storage Protocols over IP, which is available at this website:
http://tools.ietf.org/html/rfc3723
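As a hedged sketch, a CHAP secret might be assigned to a host object, and to the system as a whole, with commands along these lines; the host name and secrets are hypothetical:

   svctask chhost -chapsecret svcHostSecret1 Host_1
   svctask chcluster -iscsiauthmethod chap -chapsecret svcSystemSecret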

2.6.4 iSCSI multipathing


A multipathing driver enables the host to send commands over multiple paths to the SVC for the same volume. A fundamental multipathing difference exists between FC and iSCSI environments.
If an FC-attached host loses its FC target and its volumes go offline, for example, due to a problem in the target node, its ports, or the network, then the host has to use a separate SAN path to continue I/O. A multipathing driver is therefore always required on the host. iSCSI-attached hosts see a pause in I/O when a (target) node is reset, but (this action is the key difference) the host is reconnected to the same IP target, which reappears after a short period of time, and its volumes continue to be available for I/O. iSCSI allows failover without host multipathing. To achieve this failover, the partner node in the I/O Group takes over the port IP addresses and iSCSI names of a failed node.

Be aware: With the iSCSI implementation in SVC, an IP address failover/failback between partner nodes of an I/O Group only takes place in the case of a planned or unplanned node restart (node offline). When the partner node returns to online status, there is a delay of 5 minutes before failback of the IP addresses and iSCSI names occurs.


A host multipathing driver for iSCSI is required if you want these capabilities:
 Protecting a server from network link failures
 Protecting a server from network failures, if the server is connected through two separate networks
 Providing load balancing on the server's network links

2.7 Advanced Copy Services overview


Advanced Copy Services are a class of functionality of storage arrays and storage devices that allows various forms of block-level data duplication. Put simply, Advanced Copy Services allow you to make mirror images of some or all of your data, potentially between distant sites. This capability has many benefits and uses, including, but not limited to: facilitating disaster recovery, building reporting instances to offload billing activities from production databases, building quality assurance systems at regular intervals for regression testing, offloading offline backups from production systems, and building test systems using production data.

SVC supports the following copy services:
 Synchronous remote copy (Metro Mirror)
 Asynchronous remote copy (Global Mirror)
 Point-in-Time copy (FlashCopy)
 Data migration (Image Mode Migration and Volume Mirroring Migration)

Copy services functions are implemented either within a single SVC system (FlashCopy and Image Mode Migration) or between SVC systems (Metro Mirror and Global Mirror). Within the SVC, the intracluster copy services functions (FlashCopy and Image Mode Migration) operate at the block level, while the intercluster functions (Global Mirror and Metro Mirror) operate at the volume layer.

A volume is the container that is used to present storage to host systems. Operating at this layer allows the Advanced Copy Services functions to benefit from caching at the volume layer, which helps facilitate the asynchronous functions of Global Mirror and lessens the impact of synchronous Metro Mirror. Operating at the volume layer also allows the Advanced Copy Services functions to operate above, and independently of, the function or characteristics of the underlying disk subsystems used to provide storage resources to an SVC system. This means that as long as the physical storage is virtualized with an SVC or V7000 (as of 6.1.x), and the backing array is supported by the SVC or V7000 (as of 6.1.x), you can use disparate backing storage.

Note: While FlashCopy operates at the block level, this is the block level of the SVC, so the physical backing storage can be anything the SVC supports. However, performance will be limited to the slowest performing storage involved in the FlashCopy.

2.7.1 Synchronous/Asynchronous remote copy


Global Mirror and Metro Mirror are implemented at the volume layer within the SVC. Together, they are collectively referred to as Remote Copy. In general, the purpose of both functions is to maintain two copies of data. Often the two copies are separated by distance, but not necessarily. The remote copy can be maintained in one of two modes: synchronous or asynchronous.


Metro Mirror and Global Mirror are the IBM branded terms for synchronous remote copy and asynchronous remote copy, respectively.

Synchronous remote copy ensures that updates are physically committed (not merely held in volume cache) in both the primary and the secondary SVC clustered systems before the application considers the updates complete; therefore, the secondary is fully up-to-date if it is needed in a failover. However, the application is fully exposed to the latency and bandwidth limitations of the communication link to the secondary. In a truly remote situation, this extra latency can have a significant adverse effect on application performance; hence, Metro Mirror is limited to a distance of 300 kilometers (~186 miles). The distance alone induces latency of approximately 5 microseconds per kilometer, which does not include the latency added by the equipment in the path. The nature of synchronous remote copy is that the latency for the distance and the equipment in the path is added directly to your application I/O response times.

Special configuration guidelines exist for SAN fabrics that are used for data replication. It is necessary to consider the distance and available bandwidth of the intersite links. The SVC Support Portal contains details regarding these guidelines:
http://www-947.ibm.com/support/entry/portal/Overview/Hardware/System_Storage/Storage_software/Storage_virtualization/SAN_Volume_Controller_%282145%29
Refer to 8.6, Metro Mirror on page 410 for more details about SVC's synchronous mirroring.

In asynchronous remote copy, the application is given acknowledgement that the write is complete before the write is actually committed (written to backing storage) at the secondary. Thus, on a failover, certain updates (data) might be missing at the secondary. The application must have an external mechanism for recovering the missing updates or recovering to a consistent point in time (which is usually a few minutes in the past). This mechanism can involve user intervention, but in most practical scenarios, it must be at least partially automated.

Recovery on the secondary site involves assigning the Global Mirror targets from the SVC target system to one or more hosts (depending on your disaster recovery design), making those volumes visible on the hosts, and creating any required multipath device definitions. The application must then be started, and a recovery procedure to either a consistent point in time or recovery of the missing updates must be performed. This is why the initial state of Global Mirror targets is called crash consistent. This term may sound somewhat daunting, but it just means that the data on the volumes appears to be in the same state as if an application crash had occurred. Because most applications, such as databases, have long had mechanisms for dealing with this type of data state, recovery is a fairly mundane operation (depending upon the application). After this application recovery procedure is finished, the application starts normally.

Note: When planning your Recovery Point Objective (RPO), you need to account for application recovery procedures, the length of time they take, and the point to which they may roll back data. This means that while Global Mirror on an SVC typically provides sub-second RPO times, your effective RPO time may be up to 5 minutes or longer, depending on the application behavior.
Most clients will aim to automate failover or recovery of the remote copy through failover management software. SVC provides Simple Network Management Protocol (SNMP) traps and interfaces to enable this automation. IBM Support for automation is provided by IBM Tivoli Storage Productivity Center for Replication.
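As a hedged illustration of how a remote copy relationship is typically created and started from the CLI, the following sketch assumes that a partnership with a remote system named ITSO_SVC2 already exists; the volume and relationship names are hypothetical, and adding the -global flag creates a Global Mirror rather than a Metro Mirror relationship:

   svctask mkrcrelationship -master MM_Vol1 -aux MM_Vol1_aux -cluster ITSO_SVC2 -name MMREL1
   svctask startrcrelationship MMREL1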


The Tivoli documentation can also be accessed online at the IBM Tivoli Storage Productivity Center information center: http://publib.boulder.ibm.com/infocenter/tivihelp/v4r1/index.jsp

2.7.2 FlashCopy
FlashCopy is the IBM branded name for Point-in-Time (sometimes called Time-Zero, or T0) copy. This function makes a copy of the blocks on a source volume and can duplicate them on 1 to 256 target volumes.

Note: When using the multiple target capability of FlashCopy, if an additional copy (C) is started while an existing copy (B) is still in progress, then (C) will have a dependency on (B). This means that if you terminate (B), then (C) becomes invalid.

FlashCopy works by creating one or two (for incremental operations) bitmaps to track changes to the data on the source volume. This bitmap is also used to present an image of the source data, as it was at the point in time the copy was taken, to target hosts while the actual data is being copied. This capability ensures that copies appear to be instantaneous.

Note: In this context, bitmap refers to a special programming data structure that is used to compactly store Boolean values. Do not confuse this with the popular image file format.

If your FlashCopy targets have existing content, it is overwritten during the copy operation. This is also true of the no copy (copy rate 0) option, where only changed data is copied. After the copy operation has started, the target volume appears to have the contents of the source volume as it existed at the point in time the copy was initiated. Although the physical copy of the data takes an amount of time that varies based on system activity and configuration, the resulting data at the target appears as though the copy was made instantaneously.

FlashCopy permits the management operations to be coordinated, via a grouping of FlashCopy pairs, so that a common single point in time is chosen for copying target volumes from their respective source volumes. This capability allows a consistent copy of data for applications that span multiple volumes.

SVC also permits the source and target volumes for FlashCopy to be thin-provisioned volumes. FlashCopies to or from thin-provisioned volumes allow duplication of data while consuming less space. Such copies are dependent on the change rate of the data and typically should be used in scenarios with a time-limited existence, because over time they have the potential of filling the physical space they were allocated.

Reverse FlashCopy enables target volumes to become restore points for the source volume without breaking the FlashCopy relationship and without having to wait for the original copy operation to complete. SVC supports multiple targets and thus multiple rollback points.

In most practical scenarios, the FlashCopy functionality of the SVC should be integrated into a process or procedure that allows the benefits of the Point-in-Time copies to be leveraged to address business needs. IBM offers Tivoli Storage FlashCopy Manager for this functionality. You can read more about Tivoli Storage FlashCopy Manager at:
http://www-01.ibm.com/software/tivoli/products/storage-flashcopy-mgr/
Most clients aim to integrate the FlashCopy feature for point in time copies and quick recovery of their applications and databases.
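As a minimal, hedged CLI sketch, a FlashCopy mapping might be created, prepared, and started as follows; the volume names, mapping name, and copy rate are hypothetical examples:

   svctask mkfcmap -source DB_Vol1 -target DB_Vol1_T0 -name FCMAP1 -copyrate 50
   svctask prestartfcmap FCMAP1
   svctask startfcmap FCMAP1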


You can read a detailed description of FlashCopy copy services in Chapter 8, Advanced Copy Services on page 373.

2.7.3 Image Mode Migration and Volume Mirroring Migration


There are two methods of Advanced Copy Services available outside of the licensed Advanced Copy Services features: Image Mode Migration and Volume Mirroring Migration. Both of these abilities are included in the base software functionality of the SVC.

Image Mode Migration works by establishing a one-to-one static mapping of volumes and managed disks. This allows the data on the managed disk to be presented directly through the volume layer and allows the data to be moved between volumes and the associated backing managed disks. This provides a facility to use the SVC as a migration tool where you would otherwise have no recourse (such as migrating from Vendor A hardware to Vendor B hardware, assuming the two systems are otherwise incompatible).

Volume Mirroring Migration is a clever use of the SVC's facility to mirror the data of a volume between two storage pools. Much like the logical volume management portion of some operating systems, the SVC can provide mirroring of data transparently between two sets of physical hardware. This feature can be leveraged to move data between managed disk groups with no host I/O interruption by simply removing the original copy once the mirroring is complete. This feature is much more limited than FlashCopy and should not be used where FlashCopy is appropriate. Instead, leverage it as an infrequently used hardware refresh aid, because you now have the ability to move, interruption free, between your old storage system and your new storage system.

Note: When migrating using Volume Mirroring Migration, your I/O rate will be limited to the slower of the two managed disk groups involved, so it is imperative that you plan carefully to avoid impact to live systems.
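As a hedged sketch of the Volume Mirroring Migration approach, a second copy might be added in the target storage pool, monitored until synchronized, and the original copy then removed; the pool and volume names, and the copy ID, are hypothetical:

   svctask addvdiskcopy -mdiskgrp NEW_POOL App_Vol1
   svcinfo lsvdisksyncprogress App_Vol1
   svctask rmvdiskcopy -copy 0 App_Vol1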

2.8 SVC clustered system overview


In simple terms, a clustered system (or system) is a collection of servers that together provide a set of resources to a client. The key point is that the client has no knowledge of the underlying physical hardware of the system. The client is isolated and protected from changes to the physical hardware. This arrangement offers many benefits including, most significantly, high availability. Resources on a clustered system act as highly available versions of unclustered resources. If a node (an individual computer) in the system is unavailable or too busy to respond to a request for a resource, the request is transparently passed to another node that is capable of processing it. The clients are unaware of the exact locations of the resources they are using.

The SVC is a collection of up to eight nodes, which are added in pairs known as I/O Groups. These nodes are managed as a set (system), and they present a single point of control to the administrator for configuration and service activity. The eight-node limit for an SVC system is a limitation imposed by the microcode and is not a limit of the underlying architecture. Larger system configurations might be available in the future. The SVC demonstrated its ability to scale during a 2008 project:
http://www-03.ibm.com/press/us/en/pressrelease/24996.wss
Based on a 14-node cluster, coupled with solid-state drive controllers, the project achieved a data rate of over one million IOPS with a response time of under 1 millisecond (ms).

Although the SVC code is based on a purpose-optimized Linux kernel, the clustered system feature is not based on Linux clustering code. The clustered system software used within SVC, that is, the event manager cluster framework, is based on the outcome of the COMPASS research project. It is the key element that isolates the SVC application from the underlying hardware nodes. The clustered system software makes the code portable and provides the means to keep the single instances of the SVC code running on separate system nodes in sync. Node restarts (during a code upgrade), the addition of new nodes, the removal of old nodes from a system, and node failures therefore do not impact the SVC's availability.

It is key for all active nodes of a system to know that they are members of the system. Especially in situations such as the split-brain scenario, where single nodes lose contact with other nodes, it is key to have a solid mechanism to decide which nodes form the active system. A worst case scenario is a system that splits into two separate systems.

Within an SVC system, the voting set and a quorum disk are responsible for the integrity of the system. If nodes are added to a system, they are added to the voting set. If nodes are removed, they are also quickly removed from the voting set. Over time, the voting set, and thus the nodes in the system, can completely change, so that the system migrates onto a completely separate set of nodes from the set on which it started.

The SVC clustered system implements a dynamic quorum. Following a loss of nodes, if the system can continue operation, it adjusts the quorum requirement so that further node failure can be tolerated.

The lowest Node Unique ID in a system becomes the boss node for the group of nodes, and it proceeds to determine (from the quorum rules) whether the nodes can operate as the system. This node also presents the (maximum of two) cluster IP addresses on one or both of its Ethernet ports to allow access for system management.

2.8.1 Quorum disks


The system uses the quorum disk for two purposes: as a tie breaker in the event of a SAN fault, when exactly half of the nodes that were previously members of the system are present; and to hold a copy of important system configuration data. Just over 256 MB is reserved for this purpose on each quorum disk candidate.

There is only one active quorum disk in a system; however, the system uses three MDisks as quorum disk candidates. The system automatically selects the actual active quorum disk from the pool of assigned quorum disk candidates. If a tiebreaker condition occurs, the one-half portion of the system nodes that is able to reserve the quorum disk after the split has occurred locks the disk and continues to operate. The other half stops its operation. This design prevents both sides from becoming inconsistent with each other.

When MDisks are added to the SVC system, the SVC system checks each MDisk to see whether it can be used as a quorum disk. If the MDisk fulfills the requirements, the SVC assigns the first three MDisks added to the system as quorum candidates. One of them is selected as the active quorum disk.


Note: To be considered eligible as a quorum disk, a LUN must meet the following criteria:
 It must be presented by a disk subsystem that is supported to provide SVC quorum disks.
 It has been manually allowed to be a quorum disk candidate using the svctask chcontroller -allow_quorum yes command.
 It must be in managed mode (no image mode disks).
 It must have sufficient free extents to hold the system state information, plus the stored configuration metadata.
 It must be visible to all of the nodes in the system.

If possible, the SVC places the quorum candidates on separate disk subsystems. After the quorum disk has been selected, however, no attempt is made to ensure that the other quorum candidates are presented through separate disk subsystems.

Important: Quorum disk placement verification, and adjustment to separate storage systems if possible, reduces the dependency on a single storage system and can increase the quorum disk availability significantly.

Quorum disk candidates and the active quorum disk in a system can be listed by using the svcinfo lsquorum command. When the set of quorum disk candidates has been chosen, it is fixed. However, a new quorum disk candidate can be chosen in one of these conditions:
 When the administrator requests that a specific MDisk become a quorum disk by using the svctask setquorum command
 When an MDisk that is a quorum disk is deleted from a storage pool
 When an MDisk that is a quorum disk changes to image mode
An offline MDisk will not be replaced as a quorum disk candidate.

For disaster recovery purposes, a system needs to be regarded as a single entity, so the system and the quorum disk need to be colocated. There are special considerations concerning the placement of the active quorum disk for stretched or split cluster and split I/O Group configurations. Details are available at this website:
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003311

Important: Running an SVC system without a quorum disk can seriously affect your operation. A lack of available quorum disks for storing metadata prevents any migration operation (including a forced MDisk delete). Mirrored volumes can be taken offline if there is no quorum disk available. This behavior occurs because the synchronization status for mirrored volumes is recorded on the quorum disk.

During the normal operation of the system, the nodes communicate with each other. If a node is idle for a few seconds, a heartbeat signal is sent to ensure connectivity with the system. If a node fails for any reason, the workload that is intended for it is taken over by another node until the failed node has been restarted and readmitted into the system (which happens automatically).
If the microcode on a node becomes corrupted, resulting in a failure, the workload is transferred to another node. The code on the failed node is repaired, and the node is readmitted into the system (again, all automatically).
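As a brief, hedged illustration, the quorum disk candidates can be listed, and a specific MDisk assigned as a candidate, from the CLI; the MDisk name and quorum index are hypothetical, and the setquorum syntax shown here is an assumption (SVC 6.x also provides the equivalent chquorum command):

   svcinfo lsquorum
   svctask setquorum -quorum 1 mdisk7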

2.8.2 Split I/O groups or split cluster


An I/O Group is formed by a pair of SVC nodes. These nodes act as failover nodes for each other, and they hold mirrored copies of cached volume writes. See 2.4.2, I/O Groups on page 16 for more information. Normally, these nodes are physically located within the same rack, in the same computer room. Since SVC 6.3, to provide protection against failures that affect an entire location (for example, a power failure), you can split a single system between two physical locations, up to 10 km apart.

In this configuration, special attention must be given to the quorum disks to ensure successful clustered system failover. Generally, when the nodes in a system have been split across sites, the SVC system must be configured as listed here:
 Site 1 contains half of the SAN Volume Controller system nodes plus one quorum disk candidate.
 Site 2 contains half of the SAN Volume Controller system nodes plus one quorum disk candidate.
 Site 3 contains the active quorum disk.
This configuration ensures that a quorum disk is always available, even after a single site failure.

All internode communication between SVC node ports in the same system must not cross inter-switch links (ISLs). The same is also true for SVC to back-end disk controller traffic. This means that the FC path between sites cannot use an ISL path; the remote node must have a direct path to the switch to which its partner and the other system nodes are connected. To reach the 10 km maximum distance, longwave SFPs must be used in the nodes. Other SVC configuration rules also continue to apply; for example, the Ethernet port eth0 on every SVC node, at the local or remote site, must still be connected to the same subnet or subnets.

For more details about split cluster configuration, see 3.3.6, Split-cluster system configuration on page 87.

2.8.3 Cache
The primary benefit of storage cache is to improve I/O response time. Reads and writes to a magnetic disk drive suffer from both seek and latency time at the drive level, which can result in response times of 1 ms to 10 ms (for an enterprise-class disk).

The 2145-CF8 nodes provide 24 GB of memory per node, which is 48 GB per I/O Group and 192 GB per eight-node SVC system. The SVC provides a flexible cache model, and the node's memory can be used as read or write cache. The size of the write cache is limited to a maximum of 12 GB of the node's memory. Depending on the current I/O conditions on a node, the entire 24 GB of memory can be fully used as read cache.

Cache is allocated in 4 KB segments. A segment holds part of one track. A track is the unit of locking and destage granularity in the cache. The cache virtual track size is 32 KB (eight segments). A track might only be partially populated with valid pages. The SVC coalesces writes up to a 256 KB track size if the writes reside in the same track prior to destage; for example, 4 KB is written into a track, and another 4 KB is written to another location in the same track. Therefore, the blocks written from the SVC to the disk subsystem can be any size between 512 bytes and 256 KB.


When data is written by the host, the preferred node within the I/O Group saves the data in its cache. Before the cache returns completion to the host, the write must be mirrored to the partner node, that is, copied into the cache of its partner node, for availability reasons. After having a copy of the written data, the cache returns completion to the host. A volume that has not received a write update during the last two minutes automatically has all of its modified data destaged to disk.

If one node of an I/O Group is missing, due to a restart or a hardware failure, the remaining node empties all of its write cache and proceeds in an operation mode that is referred to as write-through mode. A node operating in write-through mode writes data directly to the disk subsystem before sending an I/O completion status message back to the host. Running in this mode can degrade the performance of the specific I/O Group.

Write cache is partitioned by storage pool. This feature restricts the maximum amount of write cache that a single storage pool can allocate in a system. Table 2-3 shows the upper limit of write cache data that a single storage pool in a system can occupy.
Table 2-3 Upper limit of write cache per storage pool

Number of storage pools        Upper limit of write cache
One storage pool               100%
Two storage pools              66%
Three storage pools            40%
Four storage pools             33%
More than four storage pools   25%

For in-depth information about SVC cache partitioning, see IBM SAN Volume Controller 4.2.1 Cache Partitioning, REDP-4426, which is available at this website:
http://www.redbooks.ibm.com/abstracts/redp4426.html?Open

An SVC node treats part of its physical memory as non-volatile, which means that its contents are preserved across power losses and resets. Bitmaps for FlashCopy and Remote Mirroring relationships, the virtualization table, and the write cache are items kept in the non-volatile memory. In the event of a disruption or external power loss, the physical memory is copied to a file in the file system on the node's internal disk drive, so that the contents can be recovered when external power is restored. The uninterruptible power supply units, which are delivered with each node's hardware, ensure that there is sufficient internal power to keep a node operational to perform this dump when external power is removed. After dumping the content of the non-volatile part of the memory to disk, the SVC node shuts down.

2.8.4 Clustered system management


The SVC can be managed by one of the following interfaces:
 A text command-line interface (CLI) accessed through a Secure Shell (SSH) connection, for example, with PuTTY
 A web browser-based graphical user interface (GUI)
 Tivoli Storage Productivity Center (TPC) Basic Edition or Standard Edition; the Basic Edition is supplied with the SVC System Storage Productivity Center console

The GUI and a web server are installed in the SVC system nodes. This means that any browser, if pointed at the system IP address, is able to access the management GUI.
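For example, CLI access is simply an SSH session to the system management IP address; the address and cluster name below are placeholders:

   ssh admin@9.64.210.64
   IBM_2145:ITSO_SVC1:admin> svcinfo lscluster -delim :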


Management console
The management console for SVC is referred to as the IBM System Storage Productivity Center (SSPC). SSPC is a hardware and software solution that includes a suite of storage infrastructure management software that can centralize, automate, and simplify the management of complex and heterogeneous storage environments.

2.8.5 IBM System Storage Productivity Center


IBM System Storage Productivity Center is based on server hardware (IBM System x-based) and a set of preinstalled and optional software modules. Several of these preinstalled modules provide base functionality only; modules providing enhanced functionality can be activated by installing separate licenses. IBM System Storage Productivity Center contains the functions listed here:

 Tivoli Integrated Portal: IBM Tivoli Integrated Portal is a standards-based architecture for web administration. The installation of Tivoli Integrated Portal is required to enable single sign-on (SSO) for Tivoli Storage Productivity Center. Tivoli Storage Productivity Center now installs Tivoli Integrated Portal along with Tivoli Storage Productivity Center.

 Tivoli Storage Productivity Center: IBM Tivoli Storage Productivity Center Basic Edition is preinstalled on the IBM System Storage Productivity Center server. Several other commercially available Tivoli Storage Productivity Center products provide additional functionality beyond Tivoli Storage Productivity Center Basic Edition. You can activate these packages by adding the specific licenses to the preinstalled Basic Edition:
   - Tivoli Storage Productivity Center for Disk allows you to monitor storage systems for performance.
   - Tivoli Storage Productivity Center for Data allows you to collect and monitor file systems and databases.
   - Tivoli Storage Productivity Center Standard Edition is a bundle that includes all of the other packages, along with SAN planning tools that make use of information that is collected from the Tivoli Storage Productivity Center components.

 Tivoli Storage Productivity Center for Replication: The functions of Tivoli Storage Productivity Center for Replication provide the management of the IBM FlashCopy, Metro Mirror, and Global Mirror capabilities for the DS8000, IBM SAN Volume Controller, and others. This package can also be activated by installing the specific licenses.

 Web browser to access the GUI

 SSH client (PuTTY)

 DS CIM agents

 Windows Server 2008 Enterprise Edition

 Several base software packages that are required for Tivoli Storage Productivity Center

Optional software packages, such as anti-virus software or DS3000/4000/5000 Storage Manager, can be installed on the IBM System Storage Productivity Center server by the client. Using Tivoli Storage Productivity Center or IBM Systems Director provides greater integration points and launch in-context capabilities.


Figure 2-11 on page 44 provides an overview of the SVC management components. We describe the details in Chapter 4, SAN Volume Controller initial configuration on page 105. You can obtain further details about the IBM System Storage Productivity Center in IBM System Storage Productivity Center User's Guide Version 1 Release 4, SC27-2336, and in IBM System Storage Productivity Center Introduction and Planning Guide, SC23-8824.

Figure 2-11 SVC management overview

2.9 User authentication


SVC provides two methods of user authentication to control access to the web-based management interface (GUI) and the command-line interface (CLI): local and remote.

Local authentication is performed within the SVC system


The local CLI authentication methods available are Secure Shell (SSH) key authentication and, newly introduced with release SVC 6.3, user name and password. The CLI setup is explained in more detail in 4.5, Secure Shell overview on page 133. Local GUI authentication is done via user name and password. GUI setup is discussed in 4.4, Configuring the GUI on page 118.

Remote authentication means that the validation of a user's permission to access the SVC's management CLI/GUI is performed on a remote authentication server. That is, except for the superuser account, there is no need to administer local user accounts on the SVC. An existing user management system in your environment can be used to control SVC user access, thereby implementing a single sign-on solution for the SVC.

2.9.1 Remote authentication via LDAP


Until SVC 6.2, the only remote authentication service supported was Tivoli Embedded Security Services, part of the Tivoli Integrated Portal (TIP).

Beginning with SVC 6.3, remote authentication via native LDAP was introduced. Supported types of LDAP servers are IBM Tivoli Directory Server, Microsoft Active Directory (MS AD), and OpenLDAP, for example, running on a Linux system. Users authenticated by an LDAP server can log on to the SVC web-based GUI and the CLI; unlike with remote authentication via Tivoli TIP, users do not need to be configured locally for CLI access. An SSH key is not required for CLI login in this scenario either. However, locally administered users can coexist with remote authentication enabled. The default administrative user, superuser, must be a local user; it can be neither deleted nor manipulated, except for its password and/or SSH key.

Multiple LDAP servers can be defined for availability reasons. Authentication requests are processed by those LDAP servers marked as preferred unless the connections fail or a user is not found. Requests are distributed across all preferred servers for load balancing in a round-robin fashion.

A user that is authenticated remotely by an LDAP server is granted permissions on the SVC according to the role assigned to the group of which it is a member. That is, any SVC user group with its assigned role - for example, CopyOperator - must exist with an identical name on the SVC system and on the LDAP server if users in that role are to be authenticated remotely.

Prerequisites:
 Either native LDAP authentication or Tivoli TIP may be selected, but not both.
 If more than one LDAP server is defined, they all must be of the same type, for example, MS AD.
 The SVC user group must be enabled for remote authentication.
 The user group name must be identical in the SVC user group management and on the LDAP server, and it is case-sensitive.
 The LDAP server must transmit a group membership attribute for the user. The default attribute name for MS AD and OpenLDAP is memberOf; for Tivoli Directory Server, it is ibm-allGroups. For OpenLDAP implementations, it might be necessary to configure the memberOf overlay if it is not in place.

In the following example, we demonstrate LDAP user authentication using a Microsoft Windows Server 2008 R2 domain controller acting as LDAP server. The first step is to configure the SVC for Remote Authentication in Settings > Directory Services, as shown in Figure 2-12.

Figure 2-12 Configure Remote Authentication

Click on Configure Remote Authentication and select the authentication type, shown in Figure 2-13 on page 46. Check LDAP and click Next.


Figure 2-13 Select authentication type

In step 2, shown in Figure 2-14, several parameters have to be configured:
 LDAP Type: Select Microsoft Active Directory; for an OpenLDAP server, the LDAP type to use would be Other.
 Security: Choose None or Transport Layer Security, depending on whether your LDAP server requires a secure connection; the LDAP server's certificate will be configured later.
 Click Advanced Settings to expand the bottom part: Leave the User Name and Password fields empty if your LDAP server supports anonymous bind. For our MS AD server, enter the credentials of an existing user on the LDAP server with permission to query the LDAP directory. The user name can be entered either in the format of an email address, for example, administrator@itso.corp, or in the distinguished name format, for example, cn=Administrator,cn=users,dc=itso,dc=corp. Note the common name portion cn=users for MS AD servers.
 In case your LDAP server uses different Attributes than the predefined ones, they can be edited here. There should be no need to edit them when MS AD is used as the LDAP service.

Figure 2-14 Configure Remote Authentication step 2 of 3

Figure 2-15 on page 47 shows step 3, where the LDAP server details are configured:
1. Enter the IP Address of at least one LDAP server.

2. Even though it is marked as optional, it may be required to enter a Base DN in the distinguished name format, which defines the starting point in the directory for user searches, for example, dc=itso,dc=corp.
3. Additional LDAP servers can be added by clicking the green plus icon.
4. Check the Preferred LDAP servers to be used, if desired.
5. Click Finish to save the settings.

Figure 2-15 LDAP server configuration
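The directory services setup can also be driven from the CLI. The following sketch assumes the SVC 6.3 commands mkldapserver and testldapserver with the parameters shown; treat the exact flags, as well as the address and base DN, as assumptions to be verified against the CLI guide:

   svctask mkldapserver -ip 9.64.210.20 -basedn dc=itso,dc=corp -preferred
   svctask testldapserver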

Now that we have enabled and configured the SVC for remote authentication, we take care of the user groups. For remote authentication through LDAP, no local SVC users have to be maintained, but the user groups have to be set up properly. The existing built-in SVC user groups may be used, as well as groups created in the SVC user management. However, using self-defined groups might be advisable to avoid SVC default groups interfering with already existing group names on the LDAP server. Any user group, built-in or self-defined, has to be enabled for remote authentication. As shown in Figure 2-16 on page 47, we create a new user group in Access > Users > New User Group:

Figure 2-16 Create a new user group

1. Assign a meaningful Group Name, for example, SVC_LDAP_CopyOperator, according to its intended role.
2. Select the desired Role: Copy Operator.
3. Mark LDAP - Enable for this group and click Create.

These settings can be modified in a group's properties at any time. Next, we create a group with exactly the same name on the LDAP server, that is, in the Active Directory domain:


1. On the Domain Controller, launch the Active Directory Users and Computers management console and navigate in your domain structure to the entity containing the user groups. Click the button shown in Figure 2-17 to create a new group.

Figure 2-17 Create new user group on the LDAP server

2. Enter exactly the same name - it is case-sensitive - in the Group Name field, shown in Figure 2-18 on page 48. Select the correct Group scope for your environment, select Security for Group type, and click OK.

Figure 2-18 Edit the group properties

3. Edit the properties of each user that should be able to log on to the SVC, and make the user a Member Of the appropriate user group for the intended SVC role, as shown in Figure 2-19. Click OK to save and apply the settings.


Figure 2-19 Make the user member of the appropriate group
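The SVC-side group created in the GUI earlier could equally be created on the CLI with the mkusergrp command and its remote flag; only the group name below is our example choice:

   svctask mkusergrp -name SVC_LDAP_CopyOperator -role CopyOperator -remote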

At this point, we are ready for the authentication of SVC users through the remote server. To make sure that everything works properly, some basic functionality tests should be made to verify the communication between the SVC and the configured LDAP service:
1. In the Settings > Directory Services screen, select Global Actions > Test LDAP Connections, as shown in Figure 2-20.

Figure 2-20 LDAP Connections Test

The result of a successful connection test is shown in Figure 2-21.


Figure 2-21 LDAP connection test successful

2. In the next step, a real user authentication attempt is tested. In the Settings > Directory Services screen, select Global Actions > Test LDAP Authentication, as shown in Figure 2-22.

Figure 2-22 LDAP Authentication Test

3. As shown in Figure 2-23, enter the User Credentials of a user which was defined on the LDAP server and click on Test.

Figure 2-23 LDAP Authentication Test

Again, a message is displayed after a successful test: CMMVC7148I Task completed successfully. Both the LDAP connection test and the authentication test must complete successfully to ensure that LDAP authentication will work properly. In case an error message points to user authentication problems during the LDAP authentication test, it may be helpful to analyze the LDAP server's response outside the SVC.

This analysis can be done using any native LDAP query tool, for example, the free software LDAP Browser or, for a pure MS AD environment, Microsoft Sysinternals ADExplorer. These tools are available at:
http://www.ldapbrowser.com/
http://technet.microsoft.com/en-us/sysinternals/bb963907
In the example output of LDAP Browser in Figure 2-24, the first Common Name (CN) component of the memberOf attribute must match the SVC user group name created earlier: SVC_LDAP_CopyOperator.

Figure 2-24 LDAP Browser for troubleshooting authentication problems

Assuming that the LDAP connection and authentication tests succeeded, users are able to log on to the SVC GUI and CLI using their network credentials, for example, their Windows domain user name and password. Figure 2-25 shows the web GUI logon screen with the Windows domain credentials entered. A user can log in with either the short name (that is, without the domain component) or with the fully qualified user name in the form of an email address:


Figure 2-25 GUI login using

After a successful login the user name is displayed in a welcome message at the top of the screen as shown in Figure 2-26.

Figure 2-26 Welcome message after successful login

CLI login is also possible with either the short user name or the fully qualified name. Figure 2-27 shows a CLI login using PuTTY, authenticated remotely. The CLI command lscurrentuser displays the user name of the currently logged-in user and its role.

Figure 2-27 CLI login with remote authentication


2.9.2 SVC user names


User names must be unique and can contain up to 256 printable ASCII characters:
 Forbidden characters are the single quotation mark ('), colon (:), percent symbol (%), asterisk (*), comma (,), and double quotation mark (").
 A user name cannot begin or end with a blank.
Passwords for local users can be up to 64 printable ASCII characters. There are no forbidden characters, but passwords cannot begin or end with blanks.

2.9.3 SVC superuser


There is a special local user called the superuser that always exists on every system; it cannot be deleted. Its password is set by the user during clustered system initialization. The superuser password can be reset from the node's front panel, and this function can be disabled, although doing so makes the system inaccessible if all of the users forget their passwords or lose their SSH keys. To register an SSH key for the superuser to provide command-line access, use the Service Assistant Configure CLI Access function to assign a temporary key. However, the key will be lost during a node restart, so the more permanent way is to add the key through the normal GUI, that is, the User Management superuser Properties panels. The superuser is always a member of user group 0, which has the most privileged role within the SVC.

2.9.4 SVC Service Assistant Tool


SVC has a tool for performing service tasks on the system. As well as being able to perform various service tasks from the front panel, you can also service a node through an Ethernet connection using a web browser to access a GUI interface. The function is called the Service Assistant Tool and requires you to enter the superuser password during login.

2.9.5 SVC roles and user groups


Each user group is associated with a single role. The role for a user group cannot be changed, but additional new user groups (with one of the defined roles) can be created. User groups are used for local and remote authentication. Because SVC knows of five roles, there are, by default, five user groups defined in an SVC system; see Table 2-4.
Table 2-4 User groups

User group ID   User group      Role
0               SecurityAdmin   SecurityAdmin
1               Administrator   Administrator
2               CopyOperator    CopyOperator
3               Service         Service
4               Monitor         Monitor


The access rights for a user belonging to a specific user group are defined by the role that is assigned to the user group. It is the role that defines what a user can or cannot do on an SVC system. Table 2-5 shows the roles, ordered from the least privileged Monitor role at the top down to the most privileged SecurityAdmin role. There is no special user group for the NasSystem role.
Table 2-5 Commands permitted for each role

Role            Allowed commands
Monitor         All svcinfo or informational commands, plus: svctask finderr, dumperrlog, dumpinternallog, chcurrentuser, ping, svcconfig backup, and svqueryclock
Service         All commands allowed for the Monitor role, plus: applysoftware, setlocale, addnode, rmnode, cherrstate, writesernum, detectmdisk, includemdisk, clearerrlog, cleardumps, settimezone, stopcluster, startstats, stopstats, and setsystemtime
CopyOperator    All commands allowed for the Monitor role, plus: prestartfcconsistgrp, startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap, startfcmap, stopfcmap, chfcmap, startrcconsistgrp, stoprcconsistgrp, switchrcconsistgrp, chrcconsistgrp, startrcrelationship, stoprcrelationship, switchrcrelationship, chrcrelationship, and chpartnership
Administrator   All commands, except: chauthservice, mkuser, rmuser, chuser, mkusergrp, rmusergrp, chusergrp, and setpwdreset
SecurityAdmin   All commands except those allowed by the NasSystem role
NasSystem       svctask addmember, activatemember, and expelmember; create and delete file system VDisks

2.9.6 SVC local authentication


Local users are users that are managed entirely on the clustered system without the intervention of a remote authentication service. Local users must have a password or an SSH public key, or both. Key authentication is attempted first, with password authentication as a fallback. Either the password or the SSH key can be used for command-line or file transfer (SecureCopy) access. For GUI access, only the password is used.
Local users: Be aware that local users are created per SVC system. Each user has a name, which must be unique across all users in one system. If you want to allow access for a user on multiple systems, you have to define the user in each system with the same name and the same privileges. A local user always belongs to only one user group. Figure 2-28 on page 55 shows an overview of local authentication within the SVC.


Figure 2-28 Simplified overview of SVC local authentication
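As a hedged sketch, a local user with both a password and an SSH key might be created as follows; the user name, password, and key file path are hypothetical, and the parameter names assume the mkuser syntax:

   svctask mkuser -name itsouser -usergrp Administrator -password passw0rd -keyfile /tmp/itsouser.pub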

2.9.7 SVC remote authentication and single sign-on


You can configure an SVC system to use a remote authentication service. Remote users are users that are managed by the remote authentication service. Remote users only have to be defined in the SVC system if command-line or file-transfer access is required; no local user definition is required for GUI-only remote access. For command-line access, the remote authentication flag has to be set and a password has to be defined for the user. Remember that for users requiring CLI access with remote authentication, the password must be defined locally for the user. Remote users cannot belong to any user group, because the remote authentication service, for example, a Lightweight Directory Access Protocol (LDAP) directory server such as IBM Tivoli Directory Server or Microsoft Active Directory, delivers the user group information. Figure 2-29 on page 56 gives an overview of SVC remote authentication.


Figure 2-29 Simplified overview of SVC remote authentication

The authentication service supported by SVC is the Tivoli Embedded Security Services server component, level 6.2. The Tivoli Embedded Security Services server provides the following key features:

 Tivoli Embedded Security Services isolates the SVC from the actual directory protocol in use, which means that the SVC communicates only with Tivoli Embedded Security Services to get its authentication information. The type of protocol that is used to access the central directory, and the kind of directory system that is used, are transparent to SVC.

 Tivoli Embedded Security Services provides a secure token facility that is used to enable single sign-on (SSO). SSO means that users do not have to log in multiple times when using what appears to them to be a single system. It is used within Tivoli Productivity Center. When SVC access is launched from within Tivoli Productivity Center, the user does not have to log on to the SVC, because the user has already logged in to Tivoli Productivity Center.

Using a remote authentication service


Follow these steps to use SVC with a remote authentication service:
1. Configure the system with the location of the remote authentication server.
   Change settings using the following command:
   svctask chauthservice.......
   View current settings using the following command:
   svcinfo lscluster.......

   SVC supports either an HTTP or HTTPS connection to the Tivoli Embedded Security Services server. If the HTTP option is used, the user and password information is transmitted in clear text over the IP network.
2. Configure user groups on the system matching those user groups that are used by the authentication service. For each group of interest that is known to the authentication service, there must be an SVC user group with the same name and the remote setting enabled. For example, you can have a group called sysadmins, whose members require the SVC Administrator role. Configure this group using the following command:
   svctask mkusergrp -name sysadmins -remote -role Administrator
   If none of a user's groups match any of the SVC user groups, the user is not permitted to access the system.
3. Configure users that do not require SSH access. Any SVC users that are to be used with the remote authentication service and do not require SSH access must be deleted from the system. The superuser cannot be deleted; it is a local user and cannot use the remote authentication service.
4. Configure users that do require SSH access. Any SVC users that are to be used with the remote authentication service and do require SSH access must have their remote setting enabled and the same password set on the system and the authentication service. The remote setting instructs SVC to consult the authentication service for group information after the SSH key authentication step to determine the user's role. The need to configure the user's password on the system in addition to the authentication service is due to a limitation in the Tivoli Embedded Security Services server software.
5. Configure the system time. For correct operation, both the SVC system and the system running the Tivoli Embedded Security Services server must have the exact same view of the current time; the easiest way is to have them both use the same Network Time Protocol (NTP) server. Failure to follow this step can lead to poor interactive performance of the SVC user interface or incorrect user-role assignments.

Also, Tivoli Storage Productivity Center leverages the Tivoli Integrated Portal infrastructure and its underlying WebSphere Application Server capabilities to make use of an LDAP registry and enable single sign-on (SSO). You can obtain more information about implementing SSO within Tivoli Storage Productivity Center 4.1 in the chapter about LDAP authentication support and single sign-on in IBM Tivoli Storage Productivity Center V4.1 Release Guide, SG24-7725, which is available at this website:
http://www.redbooks.ibm.com/redpieces/abstracts/sg247725.html?Open

2.10 SVC hardware overview


The hardware nodes, as defined in the underlying COMPASS architecture, are based on Intel processors with standard PCI-X adapters to interface with the SAN and the LAN.

Note: Since SVC V6.2 and with the 2145-CG8 hardware, the IBM System Storage SAN Volume Controller Storage Engine offers 10 Gigabit Ethernet connectivity. For more information about this topic, see:
http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=an&subtype=ca&appname=gpateam&supplier=897&letternum=ENUS111-083

This solution includes a Common Information Model (CIM) Agent to enable unified storage management based on open standards for units that comply with CIM Agent standards. The new SVC 2145-CG8 Storage Engine has the following key hardware features:
- Intel Xeon 2.53 GHz quad-core processor
- 24 GB of memory as a base, expandable up to 128 GB
- Four 2/4/8 Gbps FC ports
- Up to four solid-state drives, enabling scale-out high-performance solid-state drive support
- Two redundant power supplies
- A 19-inch rack-mounted enclosure, 1U high
- IBM Systems Director Active Energy Manager-enabled

The 2145-CG8 nodes can be easily integrated into existing SVC clustered systems. The nodes can be intermixed in pairs within existing SVC systems. Mixing node types in a system results in volume performance characteristics that depend on the node type in the volume's I/O Group. The standard nondisruptive clustered system upgrade process can be used to replace older engines with new 2145-CG8 engines; see IBM SAN Volume Controller Software Installation and Configuration Guide, GC27-2286, for more information about this topic. Refer to the following link for integration into existing clustered systems, and for compatibility and interoperability with installed nodes and UPSs:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1002999
The 2145-CG8 is shipped with preloaded V6.2 software. Figure 2-30 shows the front-side view of the SVC 2145-CG8 node.

Figure 2-30 SVC 2145-CG8 storage engine

Remember that several SVC features, such as iSCSI, are software features and are therefore available on all node types running SVC V5.1 or later.

2.10.1 Fibre Channel interfaces


The IBM SAN Volume Controller provides link speeds of 2/4/8 Gbps on SVC 2145-CG8 nodes. The nodes come with a 4-port HBA. The FC ports on these node types auto-negotiate the link speed that is used with the FC switch. The ports normally operate at the maximum speed that is supported by both the SVC port and the switch. However, if a large number of link errors occur, the ports might operate at a lower speed than what is supported. The actual port speed for each of the four ports can be displayed through the GUI, the CLI, the node's front panel, and also by light-emitting diodes (LEDs) at the rear of the node. For details, consult the node-specific SVC hardware installation guides:
- IBM System Storage SAN Volume Controller Model 2145-CG8 Hardware Installation Guide, GC27-3923
- IBM System Storage SAN Volume Controller Model 2145-CF8 Hardware Installation Guide, GC52-1356
The SVC imposes no limit on the FC optical distance between SVC nodes and host servers. FC standards, along with small form-factor pluggable (SFP) optics capabilities and the cable type, dictate the maximum FC distances that are supported. If longwave SFPs are used in the SVC nodes, the longest supported FC link between the SVC and switch is 10 km (6.21 miles). Table 2-6 shows the actual cable length that is supported with shortwave SFPs.
Table 2-6 Overview of supported cable lengths

FC speed             OM1 (M6) standard    OM2 (M5) standard    OM3 (M5E) optimized
                     62.5/125 micron      50/125 micron        50/125 micron
2 Gbps FC            150 m                300 m                500 m
4 Gbps FC            70 m                 150 m                380 m
8 Gbps FC limiting   21 m                 50 m                 150 m

Table 2-7 shows the rules that apply with respect to the number of interswitch link (ISL) hops allowed in a SAN fabric between SVC nodes or the system.
Table 2-7 Number of supported ISL hops

Between nodes in an I/O Group:        0 (connect to the same switch)
Between nodes in separate I/O Groups: 0 (connect to the same switch)
Between nodes and the disk subsystem: 1 (recommended: 0, connect to the same switch)
Between nodes and the host server:    maximum 3

2.10.2 LAN interfaces


The 2145-CG8 node has two 1 Gbps LAN ports. This node also supports optional 10 Gbps Ethernet ports, which can be used only for iSCSI I/O.

The system configuration node can be accessed on either eth0 or eth1. The system can have two IPv4 and two IPv6 addresses that are used for configuration purposes (CLI or CIMOM access). The clustered system can therefore be managed by SSH clients or GUIs on System Storage Productivity Centers on separate physical IP networks. This capability provides redundancy in the event of a failure of one of these IP networks. Support for iSCSI introduces one additional IPv4 and one additional IPv6 address for each SVC node port; these IP addresses are independent of the system configuration IP addresses. See Figure 2-10 on page 33 for an IP address overview.

2.11 Solid-state drives


Solid-state drives, or more precisely single-level cell (SLC) and multilevel cell (MLC) NAND flash-based disks (for the sake of simplicity, they are referred to as solid-state drives elsewhere in this book), can be used to overcome a growing problem that is known as the memory/storage bottleneck.

2.11.1 Storage bottleneck problem


The memory/storage bottleneck describes the steadily growing gap between the time required for a CPU to access data located in its cache/memory (typically in nanoseconds) and data located on external storage (typically in milliseconds). Although CPUs and cache/memory devices continually improve their performance, this is not true in general for mechanical disks that are used as external storage. Figure 2-31 illustrates these access time differences.

Figure 2-31 The memory/storage bottleneck

The actual times shown are not that important, but note the dramatic difference between accessing data that is located in cache and data that is located on external disk. We have added a second scale to Figure 2-31, which gives you an idea of how long it takes to access the data in a scenario where a single CPU cycle takes 1 second. This scale gives you an idea of the importance of future storage technologies closing or reducing the gap between access times for data stored in cache/memory and access times for data stored on an external medium.

Since magnetic disks were first introduced by IBM in 1956 (RAMAC), they have shown remarkable progress regarding capacity growth, form factor/size reduction, price decrease ($/GB), and reliability. However, the number of I/Os that a disk can handle and the response time that it takes to process a single I/O have not improved at the same rate, although they have certainly improved. In actual environments, we can expect from today's enterprise-class FC/serial-attached SCSI (SAS) disk up to 200 IOPS per disk with an average response time (latency) of approximately 6 ms per I/O.

To summarize, today's rotating disks continue to advance in capacity (several TBs), form factor/footprint (3.5 inches, 2.5 inches, and 1.8 inches), and price ($/GB), but they are not getting much faster. The limiting factor is the number of revolutions per minute (RPM) that a disk can perform (say 15,000), which defines the time that is required to access a specific data block on a rotating device. There will likely be small improvements in the future, but a big step, such as doubling the RPM, if technically even possible, inevitably has an associated increase in power consumption, and its price will be an inhibitor.

2.11.2 Solid-state drive solution


Solid-state drives can provide a solution for this dilemma. No rotating parts mean improved robustness and lower power consumption. A remarkable improvement in I/O performance and a massive reduction in the average I/O response time (latency) are the compelling reasons to use solid-state drives in today's storage subsystems. Enterprise-class solid-state drives typically deliver 50,000 read and 20,000 write IOPS, with typical latencies of 50 µs for reads and 800 µs for writes. Their form factors (2.5 inches/3.5 inches) and their interfaces (FC/SAS/Serial Advanced Technology Attachment (SATA)) make them easy to integrate into existing disk shelves.

2.11.3 Solid-state drive market


The solid-state drive storage market is rapidly evolving. The key differentiator among today's solid-state drive products on the market is not the storage medium, but the logic in the drives' internal controllers. The top priorities in today's controller development are optimally handling what is referred to as wear leveling, which defines the controller's capability to ensure a device's durability, and closing the remarkable gap between read and write I/O performance. Today's solid-state drive technology is only a first step into the world of high-performance persistent semiconductor storage. A group of the approximately 10 most promising technologies is collectively referred to as Storage Class Memory (SCM).

Storage Class Memory


SCM promises a massive improvement in performance (IOPS), areal density, cost, and energy efficiency compared to today's solid-state drive technology. IBM Research is actively engaged in these new technologies. You can obtain details of nanoscale devices at this website:
http://www.almaden.ibm.com/st/nanoscale_st/nano_devices/
You can obtain details of Storage Class Memory at this website:
http://tinyurl.com/plk7as
You can read a comprehensive and worthwhile overview of solid-state drive technology in a subset of the well-known Spring 2010 and 2009 SNIA Technical Tutorials, which are available on the SNIA website:
http://www.snia.org/education/tutorials/2010/spring#solid
When these technologies become a reality, they will fundamentally change the architecture of today's storage infrastructures.

2.11.4 Solid-state drives and SVC


The IBM SAN Volume Controller supports using either internal or external solid-state drives (SSDs).

Internal SSD
Some SVC models support 2.5-inch solid-state drives as internal storage. A maximum of four drives can be installed per node, and up to 32 drives in a clustered system. These drives can be used to create RAID managed disks that, in turn, can be used to create volumes. Internal solid-state drives can be configured in the following two RAID levels:
- RAID 1/10: In this configuration, one half of the mirror is placed in each node of the I/O Group, providing redundancy in case of a node failure.
- RAID 0: In this configuration, all the drives are assigned to the same node. This configuration is intended to be used with VDisk Mirroring, because no redundancy is provided in case of a node failure.

External SSD
The SVC is able to manage solid-state drives in externally attached storage controllers or enclosures. The solid-state drives are configured as an array with a LUN and presented to the SVC as a normal MDisk. The solid-state MDisk tier then needs to be set with the chmdisk -tier generic_ssd command or through the GUI. The SSD MDisks can then be placed into a single-tier SSD storage pool, and high-workload volumes can be manually selected and placed into the pool to gain the performance benefits of SSDs. For a more effective use of SSDs, place the SSD MDisks into a multitiered storage pool combined with HDD MDisks (generic_hdd tier). Then, with Easy Tier turned on, the system automatically detects and migrates high-workload extents onto the solid-state MDisks.

2.12 Easy Tier


Determining the amount of data activity in an SVC extent, and when to move the extent to an appropriate storage performance tier, is usually too complex a task to manage manually. Easy Tier is a performance optimization function that overcomes this issue. It automatically migrates, or moves, extents belonging to a volume from one MDisk storage tier to another.

Easy Tier monitors the host I/O activity and latency on the extents of all volumes with the Easy Tier function turned on in a multitier storage pool over a 24-hour period. It then creates an extent migration plan based on this activity, and will dynamically move high activity or hot extents to a higher tier within the storage pool. It will also move extents whose activity has dropped off or cooled from the high tier MDisks back to a lower tiered MDisk. Because this migration works at the extent level and not at the volume level, it is often referred to as sub-LUN migration. The Easy Tier function may be turned on or off at the storage pool and volume level.

2.12.1 Evaluation mode


To experience the potential benefits of using Easy Tier in your environment before actually installing expensive solid-state drives (SSDs), you can turn on the Easy Tier function for a single-tier storage pool. Next, turn on the Easy Tier function for the volumes within that pool. Easy Tier then starts monitoring activity on the volume extents in the pool. Easy Tier creates a migration report every 24 hours on the number of extents that would be moved if the pool were a multitiered storage pool. So, even though Easy Tier extent migration is not possible within a single-tier pool, the Easy Tier statistical measurement function is available. The usage statistics file can be offloaded from the SVC configuration node using the GUI (Settings > Support). Then you can use the Storage Advisor Tool (STAT) to create the statistics report; a web browser is used to view its output. Contact your IBM representative or IBM Business Partner for more information about the Storage Advisor Tool.

2.12.2 Automatic data placement mode


For Easy Tier to provide automatic extent migration, you need a storage pool that contains MDisks with separate disk tiers, that is, a multitiered storage pool. You then need to set the -easytier parameter to on or auto for the storage pool, and to on for the volumes. The volumes must be either striped or mirrored for Easy Tier to migrate extents. See Chapter 7, Easy Tier on page 349 for more details about Easy Tier operation and management.
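As a minimal CLI sketch (the pool and volume names are hypothetical; verify the parameter values against your code level), the following commands enable Easy Tier on a multitiered pool and on a volume in that pool:

svctask chmdiskgrp -easytier auto HybridPool
svctask chvdisk -easytier on VOL001

With the pool set to auto and the volume set to on, extent migration becomes active as soon as the pool contains more than one tier; in a single-tier pool, the same settings yield the evaluation (measurement-only) mode described in 2.12.1.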

2.13 What is new with SVC 6.3


This section highlights the new features that SVC 6.3 brings.

2.13.1 SVC 6.3 supported hardware list, device driver, and firmware levels
With the SVC 6.3 release, as in every release, IBM offers functional enhancements and new hardware that can be integrated into existing or new SVC systems and also interoperability enhancements or new support for servers, SAN switches, and disk subsystems. See the most current information at this website: http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html

2.13.2 SVC 6.3.0 new features


The following list summarizes the new features, most of which have been described previously:

- Extended distance stretched cluster
  SVC V6.3.0 now enables enterprises to access and share a consistent view of data simultaneously across data centers, and to relocate data across disk array vendors and tiers, both inside and between data centers, at full metro distances. This is accomplished by extending the supported distance between SVC nodes in a stretched cluster (split I/O Group) configuration. The supported distance depends on application latency restrictions. For specified operating environment information, visit the Information Center from the support website:
  http://www.ibm.com/storage/support/2145
  In this stretched cluster configuration, SVC enables a highly available stretched volume to be concurrently accessed by servers at both data centers. When combined with server data mobility functions such as VMware vMotion or PowerVM Live Partition Mobility, SVC stretched cluster enables nondisruptive storage and virtual machine mobility between the two data centers. Depending on application performance requirements, SVC stretched clusters may be deployed between data centers up to 300 km apart. SAN Volume Controller stretched clusters may be combined with SVC Metro Mirror or Global Mirror to support a third data center for applications that require both high availability and disaster recovery in a single solution.
- Round-robin data paths to attached storage
  System performance improvements are available with improved management of the I/O paths to attached storage systems. This round-robin method allows more flexibility for data paths and provides greater performance, especially in the event that a data path goes down.
- Lower-bandwidth Global Mirror
  Customers who want to use the Global Mirror capability with SAN Volume Controller can now do so on a lower-bandwidth link between sites. Remote mirroring with the SAN Volume Controller now supports higher recovery point objective (RPO) times by allowing the data at the disaster recovery site to get further out of sync with the production site if the communication link limits replication, and then approaching synchronicity again when the link is not as busy. This lower-bandwidth remote mirroring uses space-efficient FlashCopy targets as sources in remote copy relationships to increase the time allowed to complete a remote copy data cycle.
- Remote mirroring between SVC and Storwize V7000
  Customers have greater flexibility in their expanding environments using both Storwize V7000 and SAN Volume Controller, with the ability now to remote mirror from one system to the other. Remote deployments for disaster recovery for current SAN Volume Controller environments can easily be fitted with the Storwize V7000, or vice versa. This new function does not affect how Metro/Global Mirror on the SVC or Remote Mirroring on the Storwize V7000 is licensed. You must still license usage for volumes replicated at the source and the target. Because of a difference in metrics, SVC mirroring can be licensed for a subset of the total storage virtualized, but Storwize V7000 mirroring is licensed for the entire storage system.
- Added interoperability
  SVC now supports many more heterogeneous data center environments with the addition of interoperability support for:

  - Red Hat Enterprise Linux 6
  - VMware vSphere 5
  - IBM XIV Gen3
  - Bull StoreWay models 1500, 2000, and 3000
  - Fujitsu ETERNUS models DX80 S2, DX90 S2, DX410 S2, and DX440 S2
  - HP 3PAR models F200, F400, T400, and T800
  - Texas Memory Systems RamSan-440
  - Violin Flash Memory Array models 3140 and 3200

For all the specific models and host environments supported, visit:
http://www.ibm.com/storage/support/2145

2.14 Useful SVC web links


The SVC Support Page is at the following website:
http://www-947.ibm.com/systems/support/supportsite.wss/selectproduct?taskind=4&brandind=5000033&familyind=5329743&typeind=0&modelind=0&osind=0&psid=sr&continue.x=1

The SVC Home Page is at the following website:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/

The SVC Interoperability Page is at the following website:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html

SVC online documentation is at the following website:
http://publib.boulder.ibm.com/infocenter/svc/ic/index.jsp

IBM Redbooks publications about SVC are available at the following website:
http://www.redbooks.ibm.com/cgi-bin/searchsite.cgi?query=SVC

Chapter 3. Planning and configuration


In this chapter we describe the steps that are required when you plan the installation of an IBM System Storage SAN Volume Controller (SVC) in your storage network. We look at the implications for your storage network and also discuss performance considerations.

3.1 General planning rules


Important: At the time of writing, the statements we make are correct, but they may change over time. Always verify any statements made in this book against the SAN Volume Controller Supported Hardware List, Device Driver, Firmware and Recommended Software Levels at:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003907

To achieve the most benefit from the SVC, preinstallation planning must include several important steps. These steps ensure that the SVC provides the best possible performance, reliability, and ease of management for your application needs. Proper configuration also helps minimize downtime by avoiding changes to the SVC and the storage area network (SAN) environment to meet future growth needs.

Tip: For comprehensive information about the topics discussed here, see IBM System Storage SAN Volume Controller: Planning Guide, GA32-0551. We also go into much more depth about these topics in SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521, which is available at this website:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open

Follow these steps when planning for the SVC:
1. Collect and document the number of hosts (application servers) to attach to the SVC, the traffic profile activity (read or write, sequential or random), and the performance requirements (I/Os per second (IOPS)).
2. Collect and document the storage requirements and capacities:
   - The total back-end storage already present in the environment to be provisioned on the SVC
   - The total new back-end storage to be provisioned on the SVC
   - The required virtual storage capacity that is used as a fully managed virtual disk (volume) and used as a Space-Efficient volume
   - The required storage capacity for local mirror copy (volume mirroring)
   - The required storage capacity for point-in-time copy (FlashCopy)
   - The required storage capacity for remote copy (Metro and Global Mirror)
   - Per host: storage capacity, the host logical unit number (LUN) quantity, and sizes
3. Define the local and remote SAN fabrics and clustered systems, if a remote copy or a secondary site is needed.
4. Define the number of clustered systems and the number of pairs of nodes (between 1 and 4) for each system. Each pair of nodes (an I/O Group) is the container for the volumes. The number of necessary I/O Groups depends on the overall performance requirements.
5. Design the SAN according to the requirements for high availability and best performance. Consider the total number of ports and the bandwidth needed between the host and the SVC, between the SVC and the disk subsystem, between the SVC nodes, and for the inter-switch link (ISL) between the local and remote fabric.
6. Design the iSCSI network according to the requirements for high availability and best performance. Consider the total number of ports and the bandwidth needed between the host and the SVC.
7. Determine the SVC service IP address and the IBM System Storage Productivity Center (SVC Console) address.
8. Determine the IP addresses for the SVC system and for the hosts that are connected through iSCSI connections.
9. Define a naming convention for the SVC nodes, the hosts, and the storage subsystems.
10. Define the managed disks (MDisks) in the disk subsystem.
11. Define the Storage Pools. The Storage Pools depend on the disk subsystems in place and the data migration requirements.
12. Plan the logical configuration of the volumes within the I/O Groups and the Storage Pools in such a way as to optimize the I/O load between the hosts and the SVC.
13. Plan for the physical location of the equipment in the rack.

SVC planning can be categorized into two types:
- Physical planning
- Logical planning

We describe these planning types in more detail in the following sections.

3.2 Physical planning


There are several key factors to consider when you perform the physical planning of an SVC installation. The physical site must have the following characteristics:
- Power, cooling, and location requirements are present for the SVC and the uninterruptible power supply units.
- SVC nodes and their uninterruptible power supply units must be in the same rack.
- Place SVC nodes belonging to the same I/O Group in separate racks.
- Plan for two separate power sources if you have ordered a redundant AC power switch (available as an optional feature).
- An SVC node is one Electronic Industries Association (EIA) unit high.
- Each uninterruptible power supply unit that comes with SVC V6.3 is one EIA unit high. The uninterruptible power supply unit shipped with earlier versions of the SVC is two EIA units high.
- The optional IBM System Storage Productivity Center (SVC Console) is two EIA units high: one unit for the server and one unit for the keyboard and monitor.
- Other hardware devices can be in the same SVC rack, such as IBM System Storage DS4000, SAN switches, an Ethernet switch, and other devices.
- Consider the maximum power rating of the rack; it must not be exceeded.

3.2.1 Preparing your uninterruptible power supply unit environment


Ensure that your physical site meets the installation requirements for the uninterruptible power supply unit.

Uninterruptible power supply unit: The 2145 UPS-1U is a Powerware 5115.

2145 UPS-1U
The 2145 Uninterruptible Power Supply-1U (2145 UPS-1U) is one EIA unit high and is shipped with, and can only operate on, the following node types:
- SAN Volume Controller 2145-CG8
- SAN Volume Controller 2145-CF8
- SAN Volume Controller 2145-8A4
- SAN Volume Controller 2145-8G4
- SAN Volume Controller 2145-8F4
When configuring the 2145 UPS-1U, the voltage that is supplied to it must be 200 - 240 V, single phase.

Tip: The 2145 UPS-1U has an integrated circuit breaker and does not require external protection.

3.2.2 Physical rules


The SVC must be installed in pairs to provide high availability, and each node in the clustered system must be connected to a separate uninterruptible power supply unit. Be aware of the following considerations:
- Each SVC node of an I/O Group must be connected to a separate uninterruptible power supply unit.
- Each uninterruptible power supply unit pair that supports a pair of nodes must be connected to a separate power domain (if possible) to reduce the chances of input power loss.
- The uninterruptible power supply units, for safety reasons, must be installed in the lowest positions in the rack. If necessary, move lighter units toward the top of the rack to make way for the uninterruptible power supply units.
- The power and serial connection from a node must be connected to the same uninterruptible power supply unit; otherwise, the node will not start.
- The 2145-CG8, 2145-CF8, 2145-8A4, 2145-8G4, and 2145-8F4 hardware models must be connected to a 5115 uninterruptible power supply unit. They will not start with a 5125 uninterruptible power supply unit.

Important: Do not share the SVC uninterruptible power supply unit with any other devices.

Figure 3-1 on page 71 shows a power cabling example for the 2145-CF8.

Figure 3-1 2145-CF8 power cabling

There are guidelines to follow for Fibre Channel (FC) cable connections. Occasionally, the introduction of a new SVC hardware model means that there are internal changes. One example is the worldwide port name (WWPN) to physical port mapping. The 2145-8A4, 2145-8G4, 2145-CF8, and 2145-CG8 have the same mapping. Figure 3-2 on page 72 shows the WWPN mapping.

Figure 3-2 WWPN mapping

Figure 3-3 on page 73 shows a sample layout where nodes within each I/O Group have been split between separate racks. This protects against power failures and other events that only affect a single rack.

Figure 3-3 Sample rack layout

3.2.3 Cable connections


Create a cable connection table, or documentation following your environment's documentation procedure, to track all of the connections that are required for the setup:
- Nodes
- Uninterruptible power supply units
- Ethernet
- iSCSI connections
- FC ports
- IBM System Storage Productivity Center (SVC Console)

3.3 Logical planning


For logical planning, we cover these topics:
- Management IP addressing plan
- SAN zoning and SAN connections
- iSCSI IP addressing plan
- Back-end storage subsystem configuration
- SVC system configuration
- Split-cluster system configuration
- Storage Pool configuration
- Volume configuration
- Host mapping (LUN masking)
- Advanced Copy Services functions
- SAN boot support
- Data migration from non-virtualized storage subsystems
- SVC configuration backup procedure

3.3.1 Management IP addressing plan


For management, remember these rules:
- In addition to an FC connection, each node has an Ethernet connection for configuration and error reporting.
- Each SVC clustered system needs at least one IP address for management, and one IP address per node to be used for service with the new Service Assistant feature available starting with SVC 6.1. The service IP address is usable only from the non-config node or when the SVC system is in service mode; remember that service mode is a disruptive operation.
- Both IP addresses must be in the same IP subnet; see Example 3-1.

Example 3-1 Management IP address sample

management IP add. 10.11.12.120
service IP add. 10.11.12.121

Each node in an SVC clustered system needs to have at least one Ethernet connection. Starting with SVC 6.1, the system management is performed through an embedded GUI running on the nodes. A separate console, such as the traditional SVC Hardware Management Console (HMC) or IBM System Storage Productivity Center (SSPC), is no longer required to access the management interface. To access the management GUI, you direct a web browser to the system management IP address.

The clustered system must first be created specifying either an IPv4 or an IPv6 system address for port 1. After the clustered system is created, additional IP addresses can be created on port 1 and port 2 until both ports have an IPv4 and an IPv6 address defined. This allows the system to be managed on separate networks, which provides redundancy in the event of a network failure. Figure 3-4 on page 75 shows the IP configuration possibilities.

Figure 3-4 IP configuration possibilities

Support for iSCSI provides one additional IPv4 and one additional IPv6 address for each Ethernet port on every node. These IP addresses are independent of the clustered system configuration IP addresses. The SVC model 2145-CG8 can optionally have a SAS adapter with external ports disabled, or a high-speed 10 Gbps Ethernet adapter with two ports; with the 10 Gbps Ethernet adapter, two additional IPv4 or IPv6 addresses are required. When accessing the SVC through the GUI or Secure Shell (SSH), choose one of the available IP addresses to connect to. There is no automatic failover capability, so if one network is down, use an IP address on the alternate network. Clients may be able to use intelligence in domain name servers (DNS) to provide partial failover.
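As a hedged CLI sketch of this addressing scheme (the addresses are examples only, and the command names changed across releases, for example chclusterip became chsystemip, so verify them against your code level), a management address can be set on each port and the result listed:

svctask chsystemip -clusterip 10.11.12.120 -mask 255.255.255.0 -gw 10.11.12.1 -port 1
svctask chsystemip -clusterip 192.168.30.120 -mask 255.255.255.0 -gw 192.168.30.1 -port 2
svcinfo lssystemip

Placing the port 1 and port 2 addresses in different subnets gives the redundant management access that this section describes.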

3.3.2 SAN zoning and SAN connections


SAN storage systems using the SVC can be configured with two, or up to eight, SVC nodes, arranged in an SVC clustered system. These SVC nodes are attached to the SAN fabric, along with disk subsystems and host systems. The SAN fabric is zoned to allow the SVC nodes to see each other and the disk subsystems, and for the hosts to see the SVC nodes. The hosts are not able to directly see or operate LUNs on the disk subsystems that are assigned to the SVC system. The SVC nodes within an SVC system must be able to see each other and all of the storage that is assigned to the SVC system.

The zoning capabilities of the SAN switch are used to create three distinct zones. SVC 6.3 supports 2 Gbps, 4 Gbps, or 8 Gbps FC fabric, depending on the hardware platform and on the switch where the SVC is connected. In an environment where you have a fabric with multiple-speed switches, the best practice is to connect the SVC and the disk subsystem to the switch operating at the highest speed.

All SVC nodes in the SVC clustered system are connected to the same SANs, and they present volumes to the hosts. These volumes are created from Storage Pools that are composed of MDisks presented by the disk subsystems. There must be three distinct zones in the fabric:
- SVC clustered system zone: Create one zone per fabric with all of the SVC ports cabled to this fabric to allow SVC internode communication.
- Host zones: Create an SVC host zone for each server accessing storage from the SVC system.
- Storage zone: Create one SVC storage zone for each storage subsystem that is virtualized by the SVC.

Zoning considerations for Metro Mirror and Global Mirror


Ensure that you are familiar with the constraints for zoning a switch to support Metro Mirror and Global Mirror partnerships. SAN configurations that use intracluster Metro Mirror and Global Mirror relationships do not require additional switch zones. SAN configurations that use intercluster Metro Mirror and Global Mirror relationships require the following additional switch zoning considerations:
- For each node in a clustered system, zone exactly two Fibre Channel ports to exactly two Fibre Channel ports from each node in the partner clustered system.
- If dual-redundant ISLs are available, split the two ports from each node evenly between the two ISLs. That is, exactly one port from each node should be zoned across each ISL.
- Local clustered system zoning continues to follow the standard requirement for all ports on all nodes in a clustered system to be zoned to one another.

Attention: Failure to follow these configuration rules will expose the clustered system to the following condition and can result in loss of host access to volumes: If an intercluster link becomes severely and abruptly overloaded, the local Fibre Channel fabric can become congested to the extent that no Fibre Channel ports on the local SVC nodes are able to perform local intracluster heartbeat communication. This can, in turn, result in the nodes experiencing lease expiry events, in which a node reboots to attempt to re-establish communication with the other nodes in the clustered system. If all nodes lease expire simultaneously, this can lead to a loss of host access to volumes for the duration of the reboot events.

Configure your SAN so that FC traffic can be passed between the two clustered systems. To configure the SAN this way, you can connect the clustered systems to the same SAN, merge the SANs, or use routing technologies.
Configure zoning to allow all of the nodes in the local fabric to communicate with all of the nodes in the remote fabric. Optionally, modify the zoning so that the hosts that are visible to the local clustered system can recognize the remote clustered system. This capability allows a host to have access to data in both the local and remote clustered systems. Verify that clustered system A cannot recognize any of the back-end storage that is owned by clustered system B. A clustered system cannot access logical units (LUs) that a host or another clustered system can also access. Figure 3-5 shows the SVC zoning topology.

Figure 3-5 SVC zoning topology

Figure 3-6 on page 78 shows an example of SVC, host, and storage subsystem connections.

Figure 3-6 Example of SVC, host, and storage subsystem connections

You must also observe the following additional guidelines:
- LUNs (MDisks) must have exclusive access to a single SVC clustered system and cannot be shared between other SVC clustered systems or hosts.
- A storage controller can present LUNs to both the SVC (as MDisks) and to other hosts in the SAN. However, in this case, it is better to avoid having the SVC and hosts share the same storage ports.
- Mixed port speeds are not permitted for intracluster communication. All node ports within a clustered system must be running at the same speed.
- ISLs are not to be used for intracluster node communication or node-to-storage controller access.
- The switch configuration in an SVC fabric must comply with the switch manufacturer's configuration rules, which can impose restrictions on the switch configuration. For example, a switch manufacturer might limit the number of supported switches in a SAN. Operation outside of the switch manufacturer's rules is not supported.
- Host bus adapters (HBAs) in dissimilar hosts, or dissimilar HBAs in the same host, need to be in separate zones. For example, if you have AIX and Microsoft hosts, they need to be in separate zones. In this case, dissimilar means that the hosts are running separate operating systems or are using separate hardware platforms. Therefore, various levels of the same operating system are regarded as similar. Note that this requirement is a SAN interoperability issue, rather than an SVC requirement.
- Host zones are to contain only one initiator (HBA) each, and as many SVC node ports as you need, depending on the high availability and performance that you want from your configuration.

Attention: Be aware of the following considerations:
- The use of ISLs for intracluster node communication can negatively impact the availability of the system, due to the high dependency on the quality of these links to maintain heartbeat and other system management services. Therefore, it is strongly advised that they be used only as part of an interim configuration to facilitate SAN migrations, and not as part of the architected solution.
- The use of ISLs for SVC node to storage controller access can lead to port congestion, which can negatively impact the performance and resiliency of the SAN. Therefore, it is strongly advised that they be used only as part of an interim configuration to facilitate SAN migrations, and not as part of the architected solution.
- With SVC 6.3, you can use ISLs between nodes, but they must be in a dedicated SAN, Virtual SAN (Cisco technology), or Logical SAN (Brocade technology).
- The use of mixed port speeds for intercluster communication can lead to port congestion, which can negatively impact the performance and resiliency of the SAN, and is therefore not supported.

You can use the lsfabric command to generate a report that displays the connectivity between nodes and other controllers and hosts. This report is particularly helpful in diagnosing SAN problems.
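For example, a minimal usage sketch (the host name is hypothetical, and the available filter parameters vary by code level, so verify them before use):

svcinfo lsfabric -delim ,
svcinfo lsfabric -host Kanaga

The first command lists all node-to-node, node-to-controller, and node-to-host logins; the second restricts the output to the logins relevant to one host, which is a quick way to confirm that zoning took effect as intended.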

Zoning examples
Figure 3-7 shows an SVC clustered system zoning example.

Figure 3-7 SVC clustered system zoning example

Figure 3-8 on page 80 shows a storage subsystem zoning example.

Figure 3-8 Storage subsystem zoning example

Figure 3-9 shows a host zoning example.

Figure 3-9 Host zoning example

3.3.3 iSCSI IP addressing plan


SVC 6.3 supports host access through iSCSI (as an alternative to FC), and the following considerations apply:
- SVC uses the built-in Ethernet ports for iSCSI traffic. If the optional 10 Gbps Ethernet feature is installed, you can connect host systems through the two 10 Gbps Ethernet ports per node.
- All node types that can run SVC 6.1 or later can use the iSCSI feature.
- SVC supports the Challenge Handshake Authentication Protocol (CHAP) authentication method for iSCSI.
- iSCSI IP addresses can fail over to the partner node in the I/O Group if a node fails. This design reduces the need for multipathing support in the iSCSI host.
- iSCSI IP addresses can be configured for one or more nodes.
- iSCSI Simple Name Server (iSNS) addresses can be configured in the SVC.
- The iSCSI qualified name (IQN) for an SVC node is iqn.1986-03.com.ibm:2145.<cluster_name>.<node_name>. Because the IQN contains the clustered system name and the node name, it is important not to change these names after iSCSI is deployed.
- Each node can be given an iSCSI alias as an alternative to the IQN.
- The IQN of the host is added to an SVC host object in the same way that you add FC WWPNs. Host objects can have both WWPNs and IQNs.
- Standard iSCSI host connection procedures can be used to discover and configure the SVC as an iSCSI target.

Next, we explain several ways in which you can configure SVC 6.1 or later. Figure 3-10 shows the use of IPv4 management and iSCSI addresses in the same subnet.

Figure 3-10 Use of IPv4 addresses

You can set up the equivalent configuration with only IPv6 addresses. Figure 3-11 shows the use of IPv4 management and iSCSI addresses in two separate subnets.

Figure 3-11 IPv4 address plan with two subnets

Figure 3-12 shows the use of redundant networks.

Figure 3-12 Redundant networks

Figure 3-13 on page 83 shows the use of a redundant network and a third subnet for management.
Figure 3-13 Redundant network with third subnet for management

Figure 3-14 shows the use of a redundant network for both iSCSI data and management.

Figure 3-14 Redundant network for iSCSI and management

Be aware of these considerations:
- All of the examples are valid using IPv4 and IPv6 addresses.
- It is valid to use IPv4 addresses on one port and IPv6 addresses on the other port.
- It is valid to have separate subnet configurations for IPv4 and IPv6 addresses.
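As a minimal CLI sketch tying these pieces together (the addresses and host IQN are examples only), the following commands assign an iSCSI address to Ethernet port 1 of nodes 1 and 2, and define a host object by its IQN:

svctask cfgportip -node 1 -ip 10.11.12.121 -mask 255.255.255.0 -gw 10.11.12.1 1
svctask cfgportip -node 2 -ip 10.11.12.122 -mask 255.255.255.0 -gw 10.11.12.1 1
svctask mkhost -name linuxsrv1 -iscsiname iqn.1994-05.com.redhat:linuxsrv1

The trailing 1 on cfgportip is the Ethernet port ID; repeating the command for port 2 with addresses in a second subnet produces a redundant layout such as the one in Figure 3-11.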

3.3.4 Back-end storage subsystem configuration


Back-end storage subsystem configuration planning must be applied to all storage controllers attached to the SVC. Refer to the following website for a list of currently supported storage subsystems:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html

Apply the following general guidelines for back-end storage subsystem configuration planning:
- In the SAN, storage controllers that are used by the SVC clustered system must be connected through SAN switches. Direct connection between the SVC and the storage controller is not supported.
- Multiple connections are allowed from the redundant controllers in the disk subsystem to improve data bandwidth performance. It is not mandatory to have a connection from each redundant controller in the disk subsystem to each counterpart SAN, but it is a best practice. Therefore, controller A in the DS4000 can be connected to SAN A only, or to SAN A and SAN B, and controller B in the DS4000 can be connected to SAN B only, or to SAN B and SAN A.
- Split controller configurations are supported with certain rules and configuration guidelines. See 3.3.6, Split-cluster system configuration on page 87 for more information.
- All SVC nodes in an SVC clustered system must be able to see the same set of ports from each storage subsystem controller; violating this guideline causes paths to become degraded. This degradation can occur as a result of applying inappropriate zoning and LUN masking. This guideline has important implications for disk subsystems such as DS3000, DS4000, or DS5000, which impose exclusivity rules regarding which HBA WWPNs a storage partition can be mapped to.

Notes: SVC 6.1 and later provide better load distribution across paths within storage pools. In previous code levels, the path to MDisk assignment was made in a round-robin fashion across all MDisks configured to the clustered system. With that method, no attention is paid to how MDisks within storage pools are distributed across paths; therefore, it is possible, and even likely, that certain paths are more heavily loaded than others. This condition is even more likely to occur with a smaller number of MDisks in the storage pool. Starting with SVC 6.1, the code contains logic that considers MDisks within storage pools and more effectively distributes their active paths based on the storage controller ports available. The detectmdisk command (Detect MDisks in the GUI) needs to be run following the creation or modification (adding or removing MDisks) of storage pools for paths to be redistributed; see the sketch that follows.

If you do not have a storage subsystem that supports the SVC round-robin algorithm, then to ensure sufficient bandwidth to the storage controller and an even balance across storage controller ports, the number of MDisks per storage pool is to be a multiple of the number of storage ports available.
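As a brief sketch of that rediscovery and rebalancing step (detectmdisk takes no parameters; the filter value shown is a standard lsmdisk filter):

svctask detectmdisk
svcinfo lsmdisk -filtervalue mode=unmanaged

detectmdisk rescans the Fibre Channel network, discovers any new back-end LUNs, and rebalances MDisk access across the available controller ports; the lsmdisk filter then lists any newly discovered MDisks that are not yet in a storage pool.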

In general, configure disk subsystems as though there is no SVC. However, we suggest the following specific guidelines:
- Disk drives:
  - Exercise caution with large disk drives so that you do not have too few spindles to handle the load.
  - RAID 5 is suggested for the vast majority of workloads.
- Array sizes:
  - 8+P or 4+P is suggested for the DS4000 and DS5000 families, if possible.
  - Use a DS4000 segment size of 128 KB or larger to help the sequential performance.
  - Upgrade to EXP810 drawers, if possible.
  - Create LUN sizes that are equal to the RAID array and rank size. If the array size is greater than 2 TB and the disk subsystem does not support MDisks larger than 2 TB, create the minimum number of equal-size LUNs.
  - When adding more disks to a subsystem, consider adding the new MDisks to existing Storage Pools versus creating additional small Storage Pools. Scripts are available to restripe volume extents evenly across all MDisks in the Storage Pools, if required. Go to the website https://www.ibm.com/developerworks/mydeveloperworks/groups/service/html/communityview?communityUuid=5cca19c3-f039-4e00-964a-c5934226abc1 and search for svctools.
- Maximum of 1024 worldwide node names (WWNNs) per cluster:
  - EMC DMX/SYMM, all HDS, and SUN/HP HDS clones use one WWNN per port. Each WWNN appears as a separate controller to the SVC.
  - IBM, EMC CLARiiON, and HP use one WWNN per subsystem. Each WWNN appears as a single controller with multiple ports/WWPNs, for a maximum of 16 ports/WWPNs per WWNN.
- DS8000 using four, or eight, of the 4-port HA cards:
  - Use ports 1 and 3, or 2 and 4, on each card (it does not matter for 8 Gb cards). This setup provides 8 or 16 ports for SVC use.
  - Use a minimum of 8 ports up to 40 ranks, and use 16 ports (the maximum) for 40 or more ranks.
- DS4000/DS5000 and EMC CLARiiON/CX:
  - Both systems have the preferred controller architecture, and SVC supports this configuration.
  - Use a minimum of 4 ports, and preferably 8 or more ports, up to a maximum of 16 ports, so that more ports equate to more concurrent I/O that is driven by the SVC.
  - Support exists for mapping controller A ports to fabric A and controller B ports to fabric B, or for cross-connecting ports to both fabrics from both controllers. The cross-connecting approach is preferred to avoid AVT/Trespass occurring if a fabric or all paths to a fabric fail.
- DS3400: Use a minimum of 4 ports.

- XIV requirements and restrictions:
  - The use of XIV extended functions, including snaps, thin provisioning, synchronous replication, and LUN expansion, on LUNs presented to the SVC is not supported.
  - A maximum of 511 LUNs from one XIV system can be mapped to an SVC clustered system.
- Full 15-module XIV recommendations (161 TB usable):
  - Use two interface host ports from each of the six interface modules.
  - Use ports 1 and 3 from each interface module, and zone these 12 ports with all SVC node ports.
  - Create 48 LUNs of equal size, each of which is a multiple of 17 GB (approximately 1632 GB each if using the entire full-frame XIV with the SVC).
  - Map the LUNs to the SVC as 48 MDisks, and add all of them to one XIV Storage Pool, so that the SVC drives I/O to four MDisks/LUNs for each of the 12 XIV FC ports. This design provides a good queue depth on the SVC to drive XIV adequately.
- Six-module XIV recommendations (55 TB usable):
  - Use two interface host ports from each of the two active interface modules.
  - Use ports 1 and 3 from interface modules 4 and 5 (interface module 6 is inactive), and zone these four ports with all SVC node ports.
  - Create 16 LUNs of equal size, each of which is a multiple of 17 GB (approximately 1632 GB each if using the entire XIV with the SVC).
  - Map the LUNs to the SVC as 16 MDisks, and add all of them to one XIV Storage Pool, so that the SVC drives I/O to four MDisks/LUNs per each of the four XIV FC ports. This design provides a good queue depth on the SVC to drive XIV adequately.
- Nine-module XIV recommendations (87 TB usable):
  - Use two interface host ports from each of the four active interface modules.
  - Use ports 1 and 3 from interface modules 4, 5, 7, and 8 (interface modules 6 and 9 are inactive), and zone these eight ports with all of the SVC node ports.
  - Create 26 LUNs of equal size, each of which is a multiple of 17 GB (approximately 1632 GB each if using the entire XIV with the SVC).
  - Map the LUNs to the SVC as 26 MDisks, and add all of them to one XIV Storage Pool, so that the SVC drives I/O to three MDisks/LUNs on each of six ports and four MDisks/LUNs on the other two XIV FC ports. This design provides a useful queue depth on the SVC to drive XIV adequately.
- Configure XIV host connectivity for the SVC clustered system:
  - Create one host definition on XIV, and include all SVC node WWPNs. You can create clustered system host definitions (one per I/O Group), but the preceding method is easier.
  - Map all LUNs to all SVC node WWPNs.

3.3.5 SVC clustered system configuration


To ensure high availability in SVC installations, consider the following guidelines when you design a SAN with the SVC:
- All nodes in a clustered system must be in the same LAN segment, because the nodes in the clustered system must be able to assume the same clustered system, or service, IP address. Make sure that the network configuration allows any of the nodes to use these
  IP addresses. Note that if you plan to use the second Ethernet port on each node, it is possible to have two LAN segments. However, port 1 of every node must be in one LAN segment, and port 2 of every node must be in the other LAN segment.
- To maintain application uptime in the unlikely event of an individual SVC node failing, SVC nodes are always deployed in pairs (I/O Groups). If a node fails or is removed from the configuration, the remaining node operates in a degraded mode, but it is still a valid configuration. The remaining node operates in write-through mode, meaning that the data is written directly to the disk subsystem (the cache is disabled for writes).
- The uninterruptible power supply unit must be in the same rack as the node to which it provides power, and each uninterruptible power supply unit can have only one node connected.
- The FC SAN connections between the SVC node and the switches are optical fiber. These connections can run at 2 Gbps, 4 Gbps, or 8 Gbps, depending on your SVC and switch hardware. The 2145-CG8, 2145-CF8, 2145-8A4, 2145-8G4, and 2145-8F4 SVC nodes auto-negotiate the connection speed with the switch.
- The SVC node ports must be connected to the FC fabric only. Direct connections between the SVC and the host, or the disk subsystem, are unsupported.
- Two SVC clustered systems cannot have access to the same LUNs within a disk subsystem. Configuring zoning such that two SVC clustered systems have access to the same LUNs (MDisks) can, and likely will, result in data corruption.
- The two nodes within an I/O Group can be co-located (within the same set of racks) or can be located in separate racks and separate rooms. See 3.3.6, Split-cluster system configuration on page 87 for more information about this topic.
- The SVC uses three MDisks as quorum disks for the clustered system. A best practice for redundancy is to have each quorum disk located in a separate storage subsystem, where possible. The current locations of the quorum disks can be displayed using the lsquorum command and relocated using the chquorum command, as the following sketch shows.
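A minimal sketch of checking and relocating a quorum disk follows; the MDisk name and quorum index are hypothetical, and the exact chquorum syntax should be verified against the CLI reference for your code level:

svcinfo lsquorum
svctask chquorum -mdisk mdisk9 2

The first command shows the three quorum disk candidates and which one is active; the second moves quorum index 2 onto mdisk9, which is how each candidate can be placed on a separate storage subsystem.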

3.3.6 Split-cluster system configuration


A split-cluster system configuration (also referred to as a split I/O Group) can be implemented as a high-availability option. With SVC 6.3, two split-cluster system configurations are supported:
1. No ISL configuration:
   a. Passive wave division multiplexing (WDM) devices can be used between both sites.
   b. No ISLs exist between SVC nodes (similar to the configurations supported by SVC 5.1).
   c. The supported distance is up to 40 km.
Figure 3-15 on page 88 shows an example of a split-cluster configuration with no ISL configuration.

Figure 3-15 Split Cluster with NO ISL Configuration

2. ISL configuration:
   a. ISLs exist between SVC nodes.
   b. The maximum distance is similar to Metro Mirror distances.
   c. The physical requirements are similar to Metro Mirror requirements.
   d. ISL distance extension is possible with active and passive WDM devices.

Figure 3-16 on page 88 shows an example of Split Cluster with ISL Configuration.

Figure 3-16 Split Cluster with ISL Configuration

Use the split-cluster system configuration in conjunction with the volume mirroring option to realize an availability benefit. After volume mirroring has been configured, use the lscontrollerdependentvdisks command to validate that the volume mirrors reside on separate storage controllers, as the sketch below shows. This validation ensures that access to volumes is maintained in the event of the loss of a storage controller.

When implementing a split-cluster system configuration, two of the three quorum disks can be co-located in the same rooms where the SVC nodes are located. However, the active quorum disk must reside in a separate room. This configuration ensures that a quorum disk is always available, even after a single-site failure. For a split-cluster system configuration, configure the SVC as follows:
- Site 1: Half of the SVC clustered system nodes + one quorum disk candidate
- Site 2: Half of the SVC clustered system nodes + one quorum disk candidate
- Site 3: Active quorum disk

When a split-cluster configuration is used in conjunction with volume mirroring, this configuration provides a high-availability solution that is tolerant of a failure at a single site. If either the primary or secondary site fails, the remaining sites can continue performing I/O operations. See Appendix C, SAN Volume Controller Split I/O Group Overview, Diagnostics, and Recovery Guidelines on page 899 for more information about split-cluster configurations.
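For example, a minimal usage sketch (the controller name is hypothetical):

svcinfo lscontrollerdependentvdisks controller0

The output lists the volumes that would go offline if controller0 were lost; an empty list for each controller indicates that the volume mirrors are correctly spread across separate storage controllers.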

3.3.7 Storage Pool configuration


The Storage Pool is at the center of the many-to-many relationship between the MDisks and the volumes. It acts as a container from which managed disks contribute chunks of physical disk capacity known as extents, and from which volumes are created.

MDisks in the SVC are LUNs assigned from the underlying disk subsystems to the SVC and can be either managed or unmanaged. A managed MDisk is an MDisk that is assigned to a Storage Pool:
- A Storage Pool is a collection of MDisks. An MDisk can only be contained within a single Storage Pool.
- An SVC supports up to 128 Storage Pools.
- There is no fixed limit to the number of volumes that can be allocated from a single Storage Pool; however, there is an I/O Group limit of 2,048 volumes and a clustered system limit of 8,192 volumes.
- Volumes are associated with a single Storage Pool, except in cases where a volume is being migrated or mirrored between Storage Pools.

SVC supports extent sizes of 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, and 8192 MB. Note that support for the 4096 MB and 8192 MB extent sizes was added in SVC 6.1. The extent size is a property of the Storage Pool and is set when the Storage Pool is created. All MDisks in the Storage Pool have the same extent size, as do all volumes allocated from the Storage Pool. The extent size of a Storage Pool cannot be changed. If a different extent size is desired, the Storage Pool must be deleted and a new Storage Pool configured. Table 3-1 lists all of the extent sizes that are available in an SVC.


Table 3-1 Extent size and maximum clustered system capacities

Extent size    Maximum clustered system capacity
16 MB          64 TB
32 MB          128 TB
64 MB          256 TB
128 MB         512 TB
256 MB         1 PB
512 MB         2 PB
1,024 MB       4 PB
2,048 MB       8 PB
4,096 MB       16 PB
8,192 MB       32 PB
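As a sketch of how these properties are set on the CLI (the pool and MDisk names are hypothetical), the extent size is supplied in MB with the -ext parameter when the Storage Pool is created:

   # Create a Storage Pool with a 256 MB extent size
   mkmdiskgrp -name Pool_DS8K_1 -ext 256

   # Add an MDisk to the pool only when it is actually needed
   addmdisk -mdisk mdisk4 Pool_DS8K_1

Because the extent size cannot be changed later, choose it before any volumes are created from the pool.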

There are several additional Storage Pool considerations:

Extent size. Maximum clustered system capacity is related to the extent size. A 16 MB extent gives 64 TB, and the capacity doubles for each increment in extent size; for example, 32 MB = 128 TB. We strongly advise a minimum extent size of 128 or 256 MB. The IBM Storage Performance Council (SPC) benchmarks used a 256 MB extent. Pick one extent size and use that size for all Storage Pools. You cannot migrate volumes between Storage Pools with different extent sizes. However, you can use volume mirroring to create copies between Storage Pools with different extent sizes.

Storage Pool reliability, availability, and serviceability (RAS) considerations. It might make sense to create multiple Storage Pools if you ensure that a host only gets its volumes built from one of the Storage Pools. If the Storage Pool goes offline, it impacts only a subset of all of the hosts using the SVC. However, this approach can lead to a high number of Storage Pools, approaching the SVC limits. If you do not isolate hosts to Storage Pools, create one large Storage Pool. Creating one large Storage Pool assumes that the physical disks are all the same size, speed, and RAID level. The Storage Pool goes offline if an MDisk is unavailable, even if the MDisk has no data on it; therefore, do not put MDisks into a Storage Pool until they are needed. Create at least one separate Storage Pool for all the image mode volumes. Make sure that the LUNs that are given to the SVC have any host persistent reserves removed.

Storage Pool performance considerations. It might make sense to create multiple Storage Pools if you are attempting to isolate workloads to separate disk spindles. Storage Pools with too few MDisks cause an MDisk overload, so it is better to have a higher spindle count in a Storage Pool to meet workload requirements.


The Storage Pool and SVC cache relationship. The SVC employs cache partitioning to limit the potentially negative effect that a poorly performing storage controller can have on the clustered system. The partition allocation size is defined based on the number of Storage Pools configured. This design protects against an individual overloaded or failed controller consuming the write cache and degrading the performance of the other Storage Pools in the clustered system. More details are discussed in 2.8.3, Cache on page 41. Table 3-2 shows the limit of the write cache data.
Table 3-2 Limit of the cache data

Number of Storage Pools   Upper limit
1                         100%
2                         66%
3                         40%
4                         30%
5 or more                 25%

Consider the rule to be that no single partition can occupy more than its upper limit of cache capacity with write data. These limits are upper limits, and they are the points at which the SVC cache will start to limit incoming I/O rates for volumes created from the Storage Pool. If a particular partition reaches this upper limit, the net result is the same as a global cache resource that is full. That is, the host writes will be serviced on a one-out-one-in basis, because the cache destages writes to the back-end disks. However, only writes targeted at the full partition are limited. All I/O destined for other (non-limited) Storage Pools will continue as normal. Read I/O requests for the limited partition will also continue as normal. However, because the SVC is destaging write data at a rate that is obviously greater than the controller can sustain (otherwise the partition does not reach the upper limit), read response times are also likely to be impacted.

3.3.8 Virtual disk configuration


An individual virtual disk (volume) is a member of one Storage Pool and one I/O Group. When creating a volume, you first identify the desired performance, availability, and cost requirements for that volume, and then select the Storage Pool accordingly. The Storage Pool defines which MDisks provided by the disk subsystem make up the volume. The I/O Group (two nodes make an I/O Group) defines which SVC nodes provide I/O access to the volume.

Note: There is no fixed relationship between I/O Groups and Storage Pools.

Perform volume allocation based on the following considerations:
- Optimize performance between the hosts and the SVC by attempting to distribute volumes evenly across available I/O Groups and nodes within the clustered system.
- Reach the level of performance, reliability, and capacity you require by using the Storage Pool that corresponds to your needs (you can access any Storage Pool from any node). That is, choose the Storage Pool that fulfills the demands for your volumes with respect to performance, reliability, and capacity.

I/O Group considerations
When you create a volume, it is associated with one node of an I/O Group. By default, every time that you create a new volume, it is associated with the next node using a round-robin algorithm. You can instead specify a preferred access node, which is the node through which you send I/O to the volume rather than using the round-robin algorithm. A volume is defined for an I/O Group. Even if you have eight paths for each volume, all I/O traffic flows only toward one node (the preferred node). Therefore, only four paths are really used by the IBM Subsystem Device Driver (SDD). The other four paths are used only in the case of a failure of the preferred node or when a concurrent code upgrade is running.

Creating image mode volumes
Use image mode volumes when an MDisk already has data on it, from a non-virtualized disk subsystem. When an image mode volume is created, it directly corresponds to the MDisk from which it is created. Therefore, volume logical block address (LBA) x = MDisk LBA x. The capacity of image mode volumes defaults to the capacity of the supplied MDisk. When you create an image mode disk, the MDisk must have a mode of unmanaged and therefore does not belong to any Storage Pool. A capacity of 0 is not allowed. Image mode volumes can be created in sizes with a minimum granularity of 512 bytes, and they must be at least one block (512 bytes) in size.

Creating managed mode volumes with sequential or striped policy
When creating a managed mode volume with sequential or striped policy, you must use a number of MDisks containing extents that are free and of a size that is equal to or greater than the size of the volume that you want to create. There might be sufficient extents available on the MDisk, but there might not be a contiguous block large enough to satisfy the request.

Thin-Provisioned volume considerations
When creating a Thin-Provisioned volume, you need to understand the utilization patterns of the applications or group users accessing this volume. You must take into consideration items such as the actual size of the data, the rate of creation of new data, and the rate at which existing data is modified or deleted. There are two operating modes for Thin-Provisioned volumes:

- Autoexpand volumes allocate storage from a Storage Pool on demand with minimal user intervention required. However, a misbehaving application can cause a volume to expand until it has consumed all of the storage in a Storage Pool.

- Non-autoexpand volumes have a fixed amount of storage assigned. In this case, the user must monitor the volume and assign additional capacity when required. A misbehaving application can only cause the volume that it is using to fill up.

In addition to the initial size of the real capacity, a grain size and a warning level can be set. If a volume goes offline, either through a lack of available physical storage for autoexpand, or because a volume marked as non-autoexpand was not expanded in time, there is a danger of data being left in the cache until storage is made available. This situation is not a data integrity or data loss issue, but you must not rely on the SVC cache as a backup storage mechanism.


Important: Keep a warning level on the used capacity so that it provides adequate time to respond and provision more physical capacity. Warnings must not be ignored by an administrator. Use the autoexpand feature of Thin-Provisioned volumes.

The grain size, the allocation unit for the real capacity of the volume, can be set to 32 KB, 64 KB, 128 KB, or 256 KB. A smaller grain size utilizes space more effectively, but it results in a larger directory map, which can reduce performance. Thin-Provisioned volumes require more I/Os because of directory accesses. For truly random workloads with 70% read and 30% write, a Thin-Provisioned volume requires approximately one directory I/O for every user I/O. The directory is two-way write-back-cached (just like the SVC fastwrite cache), so certain applications will perform better. Thin-Provisioned volumes also require more CPU processing, so the performance per I/O Group can be reduced. A Thin-Provisioned volume feature called zero detect provides clients with the ability to reclaim unused allocated disk space (zeros) when converting a fully allocated volume to a Thin-Provisioned volume using volume mirroring.

Volume mirroring guidelines
Create or identify two separate Storage Pools to allocate space for your mirrored volume. Allocate the Storage Pools containing the mirrors from separate storage controllers. If possible, use a Storage Pool with MDisks that share the same characteristics. Otherwise, the volume performance can be affected by the poorest performing MDisk.
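The following CLI sketch illustrates the volume types discussed above; the volume, pool, and I/O Group names are hypothetical, and the parameter values are examples only:

   # Fully allocated, striped volume of 100 GB
   mkvdisk -mdiskgrp Pool_DS8K_1 -iogrp io_grp0 -size 100 -unit gb -name vol_app01

   # Thin-Provisioned volume: 2% initial real capacity, autoexpand,
   # 32 KB grain, warning at 80% of the virtual capacity
   mkvdisk -mdiskgrp Pool_DS8K_1 -iogrp io_grp0 -size 100 -unit gb -name vol_app02 -rsize 2% -autoexpand -grainsize 32 -warning 80%

   # Add a second (mirrored) copy of vol_app01 in a pool on a separate controller
   addvdiskcopy -mdiskgrp Pool_DS5K_1 vol_app01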

3.3.9 Host mapping (LUN masking)


For the host and application servers, the following guidelines apply:

Each SVC node presents a volume to the SAN through four ports. Because two nodes are used in normal operations to provide redundant paths to the same storage, a host with two HBAs can see multiple paths to each LUN that is presented by the SVC. Use zoning to limit the pathing from a minimum of two paths to the maximum available of eight paths, depending on the kind of high availability and performance that you want to have in your configuration. It is best to use zoning to limit the pathing to four paths.

The hosts must run a multipathing device driver to resolve the multiple paths back to a single device. The multipathing driver supported and delivered by SVC is the IBM Subsystem Device Driver (SDD). Native multipath I/O (MPIO) drivers on selected hosts are supported. For operating system-specific information about MPIO support, see this website:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html

The number of paths to a volume from a host to the nodes in the I/O Group that owns the volume must not exceed eight, even if eight is not the maximum number of paths supported by the multipath driver (SDD supports up to 32). To restrict the number of paths to a host volume, the fabrics must be zoned so that each host FC port is zoned to no more than two ports from each SVC node in the I/O Group that owns the volume.


Notes: The following list shows the suggested number of paths per volume:
- (n+1 redundancy) With 2 HBA ports: zone HBA ports to SVC ports 1 to 2 for a total of 4 paths.
- (n+1 redundancy) With 4 HBA ports: zone HBA ports to SVC ports 1 to 1 for a total of 4 paths.
- Optional (n+2 redundancy): With 4 HBA ports: zone HBA ports to SVC ports 1 to 2 for a total of 8 paths.
The term HBA port is used to describe the SCSI initiator. The term SVC port is used to describe the SCSI target. The maximum number of host paths per volume must not exceed 8.

If a host has multiple HBA ports, each port must be zoned to a separate set of SVC ports to maximize high availability and performance. To configure more than 256 hosts, you need to configure the host to I/O Group mappings on the SVC. Each I/O Group can contain a maximum of 256 hosts, so it is possible to create 1024 host objects on an eight-node SVC clustered system. Volumes can only be mapped to a host that is associated with the I/O Group to which the volume belongs.

Port masking
You can use a port mask to control the node target ports that a host can access, which satisfies two requirements:
- As part of a security policy, to limit the set of WWPNs that are able to obtain access to any volumes through a given SVC port
- As part of a scheme to limit the number of logins with mapped volumes visible to a host multipathing driver (such as SDD) and thus limit the number of host objects configured without resorting to switch zoning

The port mask is an optional parameter of the mkhost and chhost commands. The port mask is four binary bits. Valid mask values range from 0000 (no ports enabled) to 1111 (all ports enabled). For example, a mask of 0011 enables port 1 and port 2. The default value is 1111 (all ports enabled).

The SVC supports connection to the Cisco MDS family and Brocade family. See the following website for the latest support information:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html
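A brief sketch of the port mask in practice on the CLI (the host name and WWPN below are hypothetical):

   # Create a host object that can log in only through SVC ports 1 and 2
   mkhost -name AIX_host01 -fcwwpn 210100E08B251DD4 -mask 0011

   # Later, open all four ports for the same host
   chhost -mask 1111 AIX_host01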

3.3.10 Advanced Copy Services


The SVC offers these Advanced Copy Services:
- FlashCopy
- Metro Mirror
- Global Mirror


Note: SVC 6.3 introduces a new property for the clustered system called layer. This property is used when there is a copy services partnership between an SVC and an IBM Storwize V7000. There are two layers: replication and storage. All SVC clustered systems are at the replication layer, and this setting cannot be changed. By default, an IBM Storwize V7000 is at the storage layer, and it must be changed with the CLI command chsystem before it can take part in any copy services partnership with an SVC.

SVC Advanced Copy Services must follow the guidelines below.
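As a sketch, the layer change is made on the Storwize V7000 CLI (not on the SVC), and before any partnership is defined; the command below assumes a V7000 at code level 6.3:

   # Run on the Storwize V7000 before creating a partnership with an SVC
   chsystem -layer replication

   # Verify the setting (the layer property is shown in the system details)
   lssystem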

FlashCopy guidelines
Consider these FlashCopy guidelines:
- Identify each application that must have a FlashCopy function implemented for its volume.
- FlashCopy is a relationship between volumes. Those volumes can belong to separate Storage Pools and separate storage subsystems.
- You can use FlashCopy for backup purposes by interacting with the Tivoli Storage Manager Agent, or for cloning a particular environment.
- Define which FlashCopy best fits your requirements: No copy, Full copy, Thin-Provisioned, or Incremental.
- Define which FlashCopy rate best fits your requirement in terms of performance and time to complete the FlashCopy. The relationship of the background copy rate value to the attempted number of grains to be split per second is shown in Table 3-3.
- Define the grain size that you want to use. A grain is the unit of data represented by a single bit in the FlashCopy bitmap table. Larger grain sizes can cause a longer FlashCopy elapsed time and higher space usage in the FlashCopy target volume. Smaller grain sizes can have the opposite effect. Remember that the data structure and the source data location can modify those effects. In an actual environment, check the results of your FlashCopy procedure in terms of the data copied at every run and in terms of elapsed time, comparing them to the new SVC FlashCopy results. Eventually, adapt the grain/second and the copy rate parameters to fit your environment's requirements.
Table 3-3 Grain splits per second

User percentage   Data copied per second   256 KB grain per second   64 KB grain per second
1 - 10            128 KB                   0.5                       2
11 - 20           256 KB                   1                         4
21 - 30           512 KB                   2                         8
31 - 40           1 MB                     4                         16
41 - 50           2 MB                     8                         32
51 - 60           4 MB                     16                        64
61 - 70           8 MB                     32                        128
71 - 80           16 MB                    64                        256
81 - 90           32 MB                    128                       512
91 - 100          64 MB                    256                       1024
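To make the copy rate and grain size parameters concrete, here is a minimal CLI sketch (the volume and mapping names are hypothetical):

   # FlashCopy mapping with a 50% background copy rate and a 64 KB grain
   mkfcmap -source vol_db01 -target vol_db01_copy -name fcmap_db01 -copyrate 50 -grainsize 64

   # Prepare and start the mapping
   startfcmap -prep fcmap_db01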

Metro Mirror and Global Mirror guidelines


SVC supports both intracluster and intercluster Metro Mirror and Global Mirror. From the intracluster point of view, any single clustered system is a reasonable candidate for a Metro Mirror or Global Mirror operation. Intercluster operation, however, needs at least two clustered systems that are separated by a number of moderately high bandwidth links. Figure 3-17 shows a schematic of Metro Mirror connections.

Figure 3-17 Metro Mirror connections

Figure 3-17 contains two redundant fabrics. Part of each fabric exists at the local clustered system and at the remote clustered system. There is no direct connection between the two fabrics.

Technologies for extending the distance between two SVC clustered systems can be broadly divided into two categories:
- FC extenders
- SAN multiprotocol routers

Due to the more complex interactions involved, IBM explicitly tests products of this class for interoperability with the SVC. The current list of supported SAN routers can be found in the supported hardware list on the SVC support website:
http://www.ibm.com/storage/support/2145

IBM has tested a number of FC extenders and SAN router technologies with the SVC. They must be planned, installed, and tested so that the following requirements are met:

- The round-trip latency between sites must not exceed 80 ms (40 ms one-way). For Global Mirror, this limit allows a distance between the primary and secondary sites of up to 8000 km (4970.96 miles), using a planning assumption of 100 km (62.13 miles) per 1 ms of round-trip link latency. The latency of long-distance links depends upon the technology that is used to implement them. A point-to-point dark fiber-based link typically provides a round-trip latency of 1 ms per 100 km (62.13 miles) or better. Other technologies provide longer round-trip latencies, which affect the maximum supported distance.

- The configuration must be tested with the expected peak workloads.

- When Metro Mirror or Global Mirror is used, a certain amount of bandwidth is required for SVC intercluster heartbeat traffic. The amount of traffic depends on how many nodes are in each of the two clustered systems. Figure 3-18 shows the amount of heartbeat traffic, in megabits per second, that is generated by various sizes of clustered systems.

Figure 3-18 Amount of heartbeat traffic

These numbers represent the total traffic between the two clustered systems when no I/O is taking place on mirrored volumes. Half of the data is sent by one clustered system, and half by the other. The traffic is divided evenly over all available intercluster links. Therefore, if you have two redundant links, half of this traffic is sent over each link during fault-free operation.

- The bandwidth between sites must, at a minimum, be sized to meet the peak workload requirements while maintaining the maximum latency specified previously. The peak workload requirement must be evaluated by considering the average write workload over a period of one minute or less, plus the required synchronization copy bandwidth. With no synchronization copies active and no write I/O for volumes in Metro Mirror or Global Mirror relationships, the SVC protocols operate with the bandwidth indicated in Figure 3-18. However, the true bandwidth required for the link can only be determined by considering the peak write bandwidth to volumes participating in Metro Mirror or Global Mirror relationships and adding the peak synchronization copy bandwidth.

- If the link between the sites is configured with redundancy so that it can tolerate single failures, the link must be sized so that the bandwidth and latency statements continue to hold even during single failure conditions.

- The configuration is tested to simulate the failure of the primary site (to test the recovery capabilities and procedures), including eventual failback to the primary site from the secondary.

- The configuration must be tested to confirm that any failover mechanisms in the intercluster links interoperate satisfactorily with the SVC.

- The FC extender must be treated as a normal link.

- The bandwidth and latency measurements must be made by, or on behalf of, the client. They are not part of the standard installation of the SVC by IBM. Make these measurements during installation, and record them. Testing must be repeated following any significant changes to the equipment providing the intercluster link.
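Once the link is in place, a partnership between the two clustered systems is defined from the CLI on both sides; the remote system name and the bandwidth value below are hypothetical examples:

   # Run on the local system, naming the remote system and the background
   # copy bandwidth (in MBps) that replication is allowed to use
   mkpartnership -bandwidth 100 ITSO_SVC_remote

   # Repeat the equivalent command on the remote system to complete the partnership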

Global Mirror guidelines


Consider these guidelines:
- When using SVC Global Mirror, all components in the SAN must be capable of sustaining the workload generated by application hosts and the Global Mirror background copy workload. Otherwise, Global Mirror can automatically stop your relationships to protect your application hosts from increased response times. Therefore, it is important to configure each component correctly. Use a SAN performance monitoring tool, such as IBM System Storage Productivity Center, which allows you to continuously monitor the SAN components for error conditions and performance problems. This tool helps you detect potential issues before they impact your disaster recovery solution.
- The long-distance link between the two clustered systems must be provisioned to allow for the peak application write workload to the Global Mirror source volumes, plus the client-defined level of background copy.
- The peak application write workload should ideally be determined by analyzing the SVC performance statistics. Statistics must be gathered over a typical application I/O workload cycle, which might be days, weeks, or months, depending on the environment in which the SVC is used. These statistics must be used to find the peak write workload that the link must be able to support.
- Characteristics of the link can change with use; for example, latency can increase as the link is used to carry an increased bandwidth. The user must be aware of the link's behavior in such situations and ensure that the link remains within the specified limits. If the characteristics are not known, testing must be performed to gain confidence in the link's suitability.
- Users of Global Mirror must consider how to optimize the performance of the long-distance link, which depends upon the technology that is used to implement it. For example, when transmitting FC traffic over an IP link, it can be desirable to enable jumbo frames to improve efficiency.
- Using Global Mirror and Metro Mirror between the same two clustered systems is supported.
- Using Global Mirror and Metro Mirror between an SVC clustered system and an IBM Storwize V7000 with a minimum code level of 6.3 is supported.
- Cache-disabled volumes can participate in a Global Mirror relationship; however, doing so is not a best practice.
- The gmlinktolerance parameter of the remote copy partnership must be set to an appropriate value. The default value is 300 seconds (5 minutes), which is appropriate for most clients.
- During SAN maintenance, the user must choose one of the following options: reduce the application I/O workload for the duration of the maintenance (so that the degraded SAN components can handle the new workload); disable the gmlinktolerance feature; increase the gmlinktolerance value (meaning that application hosts might see extended response times from Global Mirror volumes); or stop the Global Mirror relationships. If the gmlinktolerance value is increased for maintenance lasting x minutes, it must only be reset to the normal value x minutes after the end of the maintenance activity.


- If gmlinktolerance is disabled for the duration of the maintenance, it must be re-enabled after the maintenance is complete.
- Global Mirror volumes must have their preferred nodes evenly distributed between the nodes of the clustered systems. Each volume within an I/O Group has a preferred node property that can be used to balance the I/O load between nodes in that group. Figure 3-19 shows the correct relationship between volumes in a Metro Mirror or Global Mirror solution.

Figure 3-19 Correct volume relationship

- The capabilities of the storage controllers at the secondary clustered system must be provisioned to allow for the peak application workload to the Global Mirror volumes, plus the client-defined level of background copy, plus any other I/O being performed at the secondary site. The performance of applications at the primary clustered system can be limited by the performance of the back-end storage controllers at the secondary clustered system, so provision the secondary controllers to maximize the amount of I/O that applications can perform to Global Mirror volumes.
- Do a complete review before using SATA drives for Metro Mirror or Global Mirror secondary volumes. Using a slower disk subsystem for the secondary volumes of high performance primary volumes can mean that the SVC cache might not be able to buffer all the writes, and flushing cache writes to SATA might slow I/O at the production site.
- Storage controllers must be configured to support the Global Mirror workload that is required of them. You can: dedicate storage controllers to only Global Mirror volumes; configure the controller to guarantee sufficient quality of service for the disks being used by Global Mirror; or ensure that physical disks are not shared between Global Mirror volumes and other I/O (for example, by not splitting an individual RAID array).
- MDisks within a Global Mirror storage pool must be similar in their characteristics (for example, RAID level, physical disk count, and disk speed). This requirement is true of all storage pools, but it is particularly important to maintain performance when using Global Mirror.
- When a consistent relationship is stopped, for example, by a persistent I/O error on the intercluster link, the relationship enters the consistent_stopped state. I/O at the primary site continues, but the updates are not mirrored to the secondary site. Restarting the relationship begins the process of synchronizing new data to the secondary disk. While this synchronization is in progress, the relationship is in the inconsistent_copying state, and the Global Mirror secondary volume is not in a usable state until the copy has completed and the relationship has returned to a consistent state. For this reason, it is highly advisable to create a FlashCopy of the secondary volume before restarting the relationship. When started, the FlashCopy provides a consistent copy of the data, even while the Global Mirror relationship is copying. If the Global Mirror relationship does not reach the synchronized state (if, for example, the intercluster link experiences further persistent I/O errors), the FlashCopy target can be used at the secondary site for disaster recovery purposes.
- If you are planning to use an FCIP intercluster link, it is extremely important to design and size the pipe correctly. Example 3-2 shows a best-guess bandwidth sizing formula.
Example 3-2 WAN link calculation example

Amount of write data within 24 hours times 4 to allow for peaks
Translate into MB/s to determine the WAN link needed
Example: 250 GB a day
  250 GB * 4 = 1 TB
  24 hours * 3600 secs/hr = 86400 secs
  1,000,000,000,000 / 86400 = approximately 12 MB/s
  This means OC3 or higher is needed (155 Mbps or higher)

If compression is available on routers or WAN communication devices, smaller pipelines might be adequate. Note that the workload is probably not evenly spread across 24 hours. If there are extended periods of high data change rates, consider suspending Global Mirror during that time frame.

If the network bandwidth is too small to handle the traffic, the application write I/O response times might be elongated. For the SVC, Global Mirror must support short-term peak write bandwidth requirements. Remember that SVC Global Mirror is much more sensitive to a lack of bandwidth than the DS8000. You also need to consider the initial sync and re-sync workload. The Global Mirror partnership's background copy rate must be set to a value that is appropriate to the link and to the secondary back-end storage. The more bandwidth that you give to the sync and re-sync operation, the less workload can be delivered by the SVC for the regular data traffic.

Do not propose Global Mirror if the data change rate will exceed the communication bandwidth or if the round-trip latency exceeds 80 - 120 ms. Greater than 80 ms round-trip latency requires SCORE/RPQ submission.
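The tuning knobs mentioned above map to two CLI settings in SVC 6.x; the system name and the values shown are hypothetical, so size them for your own link:

   # Adjust the background copy bandwidth (in MBps) for the partnership
   chpartnership -bandwidth 40 ITSO_SVC_remote

   # Set the Global Mirror link tolerance, in seconds (300 is the default)
   chsystem -gmlinktolerance 300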

3.3.11 SAN boot support


The SVC supports SAN boot or startup for AIX, Windows Server 2003, and other operating systems. SAN boot support can change from time to time, so check the following website regularly: http://www.ibm.com/systems/storage/software/virtualization/svc/interop.html

3.3.12 Data migration from a non-virtualized storage subsystem


Data migration is an extremely important part of an SVC implementation. Therefore, a data migration plan must be accurately prepared. You might need to migrate your data for one of these reasons:
- To redistribute workload within a clustered system across the disk subsystem
- To move workload onto newly installed storage
- To move workload off old or failing storage, ahead of decommissioning it
- To move workload to rebalance a changed workload
- To migrate data from an older disk subsystem to SVC-managed storage
- To migrate data from one disk subsystem to another disk subsystem

Because there are multiple data migration methods, choose the method that best fits your environment, your operating system platform, your kind of data, and your application's service level agreement. Data migration methods fall into three groups:
- Based on operating system Logical Volume Manager (LVM) or commands
- Based on special data migration software
- Based on the SVC data migration feature

With data migration, apply the following guidelines:
- Choose the data migration method that best fits your operating system platform, your kind of data, and your service level agreement.
- Check the interoperability matrix for the storage subsystem to which your data is being migrated:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html
- Choose where you want to place your data after migration in terms of the Storage Pools related to a specific storage subsystem tier.
- Check whether a sufficient amount of free space or extents is available in the target Storage Pool.
- Decide whether your data is critical and must be protected by a volume mirroring option, or whether it must be replicated to a remote site for disaster recovery.
- Prepare offline all of the zone and LUN masking/host mappings that you might need, to minimize downtime during the migration.
- Prepare a detailed operation plan so that you do not overlook anything at data migration time.
- Execute a data backup before you start any data migration. Data backup must be part of the regular data management process.
- You might want to use the SVC as a data mover to migrate data from a non-virtualized storage subsystem to another non-virtualized storage subsystem. In this case, you might have to add additional checks that are related to the specific storage subsystem to which you want to migrate.
- Be careful when using slower disk subsystems for the secondary volumes of high performance primary volumes, because the SVC cache might not be able to buffer all the writes, and flushing cache writes to SATA might slow I/O at the production site.
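When the SVC data migration feature is used, a typical flow looks like the following sketch (the MDisk, pool, and volume names are hypothetical):

   # Import the existing LUN as an image mode volume (1:1 LBA mapping)
   mkvdisk -mdiskgrp Pool_Image -iogrp io_grp0 -vtype image -mdisk mdisk10 -name vol_legacy01

   # Migrate the volume into a managed (striped) Storage Pool in the background
   migratevdisk -vdisk vol_legacy01 -mdiskgrp Pool_New -threads 4

   # Monitor the progress of the migration
   lsmigrate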

3.3.13 SVC configuration backup procedure


Save the configuration externally when changes, such as adding new nodes, disk subsystems, and so on, have been performed on the clustered system. Configuration saving is a crucial part of the SVC management, and various methods can be applied to back up your SVC configuration. Best practice is to implement an automatic configuration backup by applying the configuration backup command. We describe this command for the CLI and the GUI in Chapter 9, SAN Volume Controller operations using the command-line interface on page 467 and in Chapter 10, SAN Volume Controller operations using the GUI on page 631.
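From the CLI, the backup can be produced and then copied off the clustered system, as in this sketch (the user name, cluster IP address, and destination path are examples only):

   # Run on the SVC CLI: writes svc.config.backup.xml in /tmp on the config node
   svcconfig backup

   # Run from a management workstation: copy the backup files off the cluster
   scp admin@9.43.86.117:/tmp/svc.config.backup.* /backup/svc/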


3.4 Performance considerations


Although storage virtualization with the SVC improves flexibility and provides simpler management of a storage infrastructure, it can also provide a substantial performance advantage for a variety of workloads. The SVC's caching capability and its ability to stripe volumes across multiple disk arrays are the reasons why the performance improvement is significant when the SVC is implemented with midrange disk subsystems, because this technology is often only provided with high-end enterprise disk subsystems.

Tip: Technically, almost all storage controllers provide both striping (RAID-5 or RAID-10) and a form of caching. The real benefit is the degree to which you can stripe the data across all MDisks in a storage pool and therefore have the maximum number of spindles active at one time. The caching is secondary. The SVC provides additional caching to what midrange controllers provide (usually a couple of GB), whereas enterprise systems have much larger caches.

To ensure the desired performance and capacity of your storage infrastructure, it is best to undertake a performance and capacity analysis to reveal the business requirements of your storage environment. When this analysis is done, you can use the guidelines in this chapter to design a solution that meets the business requirements.

When discussing performance for a system, it always comes down to identifying the bottleneck, and thereby the limiting factor of a given system. You must also take into consideration the component whose workload you identify as the limiting factor, because it might not be the same component that is the limiting factor for other workloads. When designing a storage infrastructure using the SVC, or implementing the SVC in an existing storage infrastructure, you must therefore take into consideration the performance and capacity of the SAN, the disk subsystems, the SVC, and the known or expected workload.

3.4.1 SAN
The SVC now has many models: 2145-8F4, 2145-8G4, 2145-8A4, 2145-CF8 and 2145-CG8. All of them can connect to 2 Gbps, 4 Gbps, or 8 Gbps switches. From a performance point of view, it is better to connect the SVC to 8 Gbps switches. Correct zoning on the SAN switch will bring security and performance together. Implement a dual HBA approach at the host to access the SVC.

3.4.2 Disk subsystems


From a performance perspective, there are a few guidelines for connecting to an SVC. Connect all storage ports to the switch, up to a maximum of 16, and zone them to all of the SVC ports. Zone all ports on the back-end disk storage to all ports on the SVC nodes in a clustered system. Also ensure that you configure the storage subsystem LUN masking settings to map all LUNs used by the SVC to all the SVC WWPNs in the clustered system. The SVC is designed to handle large quantities of multiple paths from the back-end storage.

In most cases, the SVC is able to improve performance, especially on middle- to low-end disk subsystems, older disk subsystems with slow controllers, or uncached disk systems, for these reasons:
- The SVC can stripe across disk arrays, and it can do so across the entire set of supported physical disk resources.
- The SVC has a 4 GB, 8 GB, or 24 GB cache (24 GB in the recent 2145-CF8 and 2145-CG8 models) and an advanced caching mechanism.
- The SVC can provide automated performance optimization of hot spots through the use of Solid State Drives (SSDs) and Easy Tier.

The SVC's large cache and advanced cache management algorithms also allow it to improve upon the performance of many types of underlying disk technologies. The SVC's capability to manage, in the background, the destaging operations incurred by writes (in addition to still supporting full data integrity) has the potential to be particularly important in achieving good database performance.

Depending upon the size, age, and technology level of the disk storage system, the total cache available in the SVC can be larger, smaller, or about the same as that associated with the disk storage. Because hits to the cache can occur in either the upper (SVC) or the lower (disk controller) level of the overall system, the system as a whole can take advantage of the larger amount of cache wherever it is located. Thus, if the storage controller level of cache has the greater capacity, expect hits to this cache to occur, in addition to hits in the SVC cache. Also, regardless of their relative capacities, both levels of cache tend to play an important role in allowing sequentially organized data to flow smoothly through the system. The SVC cannot increase the throughput potential of the underlying disks in all cases, because this ability depends upon both the underlying storage technology and the degree to which the workload exhibits hot spots or sensitivity to cache size or cache algorithms.

IBM SAN Volume Controller 4.2.1 Cache Partitioning, REDP-4426, explains the SVC's cache partitioning capability:
http://www.redbooks.ibm.com/abstracts/redp4426.html?Open

3.4.3 SVC
The SVC clustered system is scalable up to eight nodes, and performance scales nearly linearly when adding nodes to an SVC clustered system, until it becomes limited by other components in the storage infrastructure. Although virtualization with the SVC provides a great deal of flexibility, it does not diminish the necessity of having a SAN and disk subsystems that can deliver the desired performance. Essentially, SVC performance improvements are gained by having as many MDisks as possible, therefore creating a greater level of concurrent I/O to the back-end without overloading a single disk or array.

Assuming that there are no bottlenecks in the SAN or on the disk subsystem, remember that specific guidelines must be followed when you are performing these tasks:
- Creating a Storage Pool
- Creating volumes
- Connecting to or configuring hosts that must receive disk space from an SVC clustered system

You can obtain more detailed information about performance and best practices for the SVC in SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open

3.4.4 Performance monitoring


Performance monitoring must be an integral part of the overall IT environment. For the SVC, as for the other IBM storage subsystems, the official IBM tool to collect performance statistics and supply a performance report is the TotalStorage Productivity Center.

You can obtain more information about using the TotalStorage Productivity Center to monitor your storage subsystem in Monitoring Your Storage Subsystems with TotalStorage Productivity Center, SG24-7364: http://www.redbooks.ibm.com/abstracts/sg247364.html?Open See Chapter 10, SAN Volume Controller operations using the GUI on page 631, for detailed information about collecting performance statistics.


Chapter 4. SAN Volume Controller initial configuration


In this chapter we discuss the following topics:
- Managing the cluster
- System Storage Productivity Center overview
- SAN Volume Controller (SVC) Hardware Management Console
- SVC initial configuration steps


4.1 Managing the cluster


There are many ways to manage the SVC. The most commonly used are the following:
- Using the SVC Management GUI
- Using a PuTTY-based SVC command-line interface
- Using the System Storage Productivity Center (SSPC)

Figure 4-1 shows the various ways to manage an SVC cluster.

Figure 4-1 SVC cluster management

Note that you have full management control of the SVC regardless of which method you choose. IBM System Storage Productivity Center is supplied by default when you purchase your SVC cluster. If you already have a previously installed SVC cluster in your environment, it is possible that you are using the SVC Console (Hardware Management Console (HMC)). You can still use it together with IBM System Storage Productivity Center, but you can only log in to your SVC from one of them at a time. If you decide to manage your SVC cluster with the SVC CLI, it does not matter if you are using the SVC Console or IBM System Storage Productivity Center, because the SVC CLI is located on the cluster and accessed through Secure Shell (SSH), which can be installed anywhere.
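Because the CLI is reached over SSH, a session can be opened from any workstation with an SSH client; the user name, key path, and cluster IP address below are examples only:

   # Open a CLI session to the cluster using the key pair registered with the SVC
   ssh -i ~/.ssh/svc_key admin@9.43.86.117

   # Once logged in, query the clustered system properties
   lssystem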

4.1.1 TCP/IP requirements for SAN Volume Controller


To plan your installation, consider the TCP/IP address requirements of the SAN Volume Controller cluster and the requirements for the SAN Volume Controller cluster to access other services. You must also plan the address allocation and the Ethernet router, gateway, and firewall configuration to provide the required access and network security.


Figure 4-2 shows the TCP/IP ports and services that are used by the SVC.

Figure 4-2 TCP/IP ports

For more information about TCP/IP prerequisites, see Chapter 3, Planning and configuration on page 67 and also the IBM System Storage Productivity Center: Introduction and Planning Guide, SC23-8824. To assist you in starting an SVC initial configuration, Figure 4-3 shows a common flowchart that covers all of the types of management.


Figure 4-3 SVC initial configuration flowchart

In the next sections, we describe each of the steps shown in Figure 4-3.

4.2 System Storage Productivity Center overview


The IBM System Storage Productivity Center (SSPC) is an integrated hardware and software solution that provides a single point of entry for managing SAN Volume Controller clusters, IBM System Storage DS8000 systems, and other components of your data storage infrastructure. SSPC simplifies storage management in the following ways:
- It centralizes the management of storage network resources with IBM storage management software.
- It provides greater synergy between storage management software and IBM storage devices.
- It reduces the number of servers that are required to manage your software infrastructure.
- It provides simple migration from basic device management to storage management applications that provide higher-level functions.

The current release of System Storage Productivity Center (1.5) consists of the following components:
- IBM Tivoli Storage Productivity Center Basic Edition 4.2.1 is pre-installed on the System Storage Productivity Center server.


- Tivoli Storage Productivity Center for Replication is pre-installed. An additional license is required.
- IBM System Storage DS Storage Manager 10.70 is available for you to optionally install on the System Storage Productivity Center server, or on a remote server. The DS Storage Manager 10.70 can manage the IBM DS3000, IBM DS4000, and IBM DS5000. With DS Storage Manager 10.70, when you use Tivoli Storage Productivity Center to add and discover a DS CIM Agent, you can launch the DS Storage Manager from the topology viewer, the Configuration Utility, or the Disk Manager of the Tivoli Storage Productivity Center.
- IBM Java 1.6 is pre-installed and supports DS Storage Manager 10.70; you do not need to download Java from Sun Microsystems.
- DS CIM Agent management commands: The DS CIM Agent management commands (DSCIMCLI) for 5.5.0.3 are pre-installed on the System Storage Productivity Center.
- IBM DB2 Enterprise Server Edition.
- PuTTY (SSH client software).

SSPC supports SVC 6.1 and later code levels, as well as the IBM System Storage Storwize V7000. It also supports manual installation of the 5.1 GUI (the SVC Console needed for SVC 5.1 or previous SVC releases is also available on the IBM website). With SVC 6.1 and later code levels, the GUI console is embedded in the SVC cluster, so there is no longer a need to install any SVC software directly on the SSPC.

Figure 4-4 shows the product stack in the IBM System Storage Productivity Center Console 1.5.

Figure 4-4 Overview of the IBM System Storage Productivity Center


IBM System Storage Productivity Center has all of the software components pre-installed and tested on a System x machine, model IBM System Storage Productivity Center 2805-MC5, with Windows installed on it. All the software components installed on the IBM System Storage Productivity Center can be ordered and installed on hardware that meets or exceeds the minimum requirements. For a detailed guide to the IBM System Storage Productivity Center, refer to IBM System Storage Productivity Center Software Installation and User's Guide, SC23-8823. For information pertaining to physical connectivity to the SVC, see Chapter 3, Planning and configuration on page 67.

4.2.1 IBM System Storage Productivity Center hardware


The hardware used by the IBM System Storage Productivity Center solution is the IBM System Storage Productivity Center 2805-MC5. It is a 1U rack-mounted server with the following initial configuration:
- One Intel Xeon E5630 quad-core processor, with a speed of 2.53 GHz and an L3 cache of 12 MB
- 8 GB of RAM PC3-10600 1333 MHz
- Two 2.5" SAS Open Bay hard disk drives
- Two Broadcom 5709C Ethernet cards
- One CD/DVD bay with read and write-read capability
- Microsoft Windows 2008 Enterprise Edition
- Optional secondary power supply

It is designed to perform System Storage Productivity Center functions. If you plan to upgrade System Storage Productivity Center for more functions, you can purchase the Performance Upgrade Kit to add more capacity to your hardware.

4.2.2 SVC installation planning information for System Storage Productivity Center
Consider the following steps when planning the System Storage Productivity Center installation:
- Verify that the hardware and software prerequisites have been met.
- Determine the location of the rack where the System Storage Productivity Center is to be installed.
- Verify that the System Storage Productivity Center will be installed in line of sight to the SVC nodes.
- Verify that you have a keyboard, mouse, and monitor available to use.
- Determine the cabling required.
- Determine the network IP address.
- Determine the System Storage Productivity Center host name.


For detailed installation guidance, see IBM System Storage Productivity Center: Introduction and Planning Guide, SC23-8824:
https://www-304.ibm.com/systems/support/supportsite.wss/supportresources?brandind=5000033&familyind=5356448

Also see IBM Tivoli Storage Productivity Center and IBM Tivoli Storage Productivity Center for Replication Installation and Configuration Guide, SC27-2337:
http://www-01.ibm.com/support/docview.wss?rs=1181&uid=ssg1S7002597

Figure 4-5 shows the front view of the System Storage Productivity Center Console based on the 2805-MC5 hardware.

Figure 4-5 System Storage Productivity Center 2805-MC5 front view

Figure 4-6 shows a rear view of System Storage Productivity Center Console based on the 2805-MC5 hardware.

Figure 4-6 System Storage Productivity Center 2805-MC5 rear view

4.3 Setting up the SVC cluster


This section provides the step-by-step instructions that are needed to create the cluster. You must create a cluster to use SAN Volume Controller virtualized storage. The first phase to create a cluster is performed from the front panel of the SAN Volume Controller (see 4.3.3, Initiating cluster creation from the front panel on page 115). The second phase is performed from a web browser accessing the management GUI (see 4.4, Configuring the GUI on page 118).

4.3.1 Introducing the service panels


This section gives you an overview of the service panels you have available, depending on your SVC nodes. Use Figure 4-7 as a reference for the SVC 2145-8F2 and 2145-8F4 node model buttons to be pressed in the steps that follow.


Figure 4-7 SVC 8F2 node and SVC 8F4 node front and operator panel

Use Figure 4-8 for the SVC Node 2145-8G4 and 2145-8A4 models.


Figure 4-8 SVC 8G4 node front and operator panel

Use Figure 4-9 as a reference for the SVC Node 2145-CF8 model; the figure shows the CF8 model front panel.


Figure 4-9 CF8 front panel

See Figure 4-10 for the SVC Node 2145-CG8 model.

Figure 4-10 SVC CG8 node front and operator panel

SVC V6.1 and later code levels introduce a new method for performing service tasks. In addition to being able to perform service tasks from the front panel, you can also service a node through an Ethernet connection using either a web browser or the command-line interface. An additional service IP address for each node canister is required. For more details, see 4.4.3, Configuring the Service IP Addresses on page 131 and 10.17, Service Assistant with the GUI on page 863.


4.3.2 Prerequisites
Ensure that the SVC nodes are physically installed and that Ethernet and Fibre Channel connectivity has been correctly configured. For information about physical connectivity to the SVC, see Chapter 3, Planning and configuration on page 67.

Prior to configuring the cluster, ensure that the following information is available:
- License: The license indicates whether the client is permitted to use FlashCopy, Metro Mirror, or both. It also indicates how much capacity the client is licensed to virtualize.
- For IPv4 addressing:
  - Cluster IPv4 addresses: These addresses include one address for the cluster and another address for the service address.
  - IPv4 subnet mask.
  - Gateway IPv4 address.
- For IPv6 addressing:
  - Cluster IPv6 addresses: These addresses include one address for the cluster and another address for the service address.
  - IPv6 prefix.
  - Gateway IPv6 address.

You must create a cluster to use the SAN Volume Controller virtualized storage. The first phase of creating a cluster is performed from the front panel of the SAN Volume Controller. The second phase is performed from a web browser accessing the management GUI.

4.3.3 Initiating cluster creation from the front panel


After the hardware is physically installed into racks, complete the following steps to initially configure the cluster through the physical service panel; see 4.3.1, Introducing the service panels on page 111.

1. Choose any node that is to become a member of the cluster being created.

Note: To add additional nodes to your cluster, use a separate process after you have successfully created and initialized the cluster on the selected node.

2. Press and release the up or down button until Actions is displayed.

Important: If a time-out occurs when you enter the input for the fields during these steps, you must begin again from step 2. All of the changes are lost, so be sure to have all of the information available before beginning again.

3. Press and release the select button.

4. Depending on whether you are creating a cluster with an IPv4 address or an IPv6 address, press and release the up or down button until either New Cluster IPv4? or New Cluster IPv6? is displayed. Figure 4-11 shows the various options for the cluster creation.


Figure 4-11 Cluster IPv4? and Cluster IPv6? options on the front panel display

If the New Cluster IPv4? or New Cluster IPv6? actions are displayed, move directly to step 5. If the New Cluster IPv4? or New Cluster IPv6? actions are not displayed, this node is already a member of a cluster:
a. Press and release the up or down button until Actions is displayed.
b. Press and release the select button to return to the Main Options menu.
c. Press and release the up or down button until Cluster: is displayed. The name of the cluster that the node belongs to is displayed on line 2 of the panel. In this case, there are two options:
a. If you want to delete this node from the cluster:
   i. Press and release the up or down button until Actions is displayed.
   ii. Press and release the select button.
   iii. Press and release the up or down button until Remove Cluster? is displayed.
   iv. Press and hold the up button.
   v. Press and release the select button.
   vi. Press and release the up or down button until Confirm remove? is displayed.
   vii. Press and release the select button.
   viii. Release the up button, which deletes the cluster information from the node. Go back to step 1 on page 115 and start again.
b. If you do not want this node to be removed from an existing cluster, review the situation and determine the correct nodes to include in the new cluster.

5. Press and release the select button to create the new cluster.

6. Press and release the select button again to modify the IP address.

7. Use the up or down navigation buttons to change the value of the first field of the IP address to the value that has been chosen.


Notes: For IPv4, pressing and holding the up or down button increments or decrements the IP address field by units of 10. The field value rotates from 0 to 255 with the down button, and from 255 to 0 with the up button. For IPv6, the address and the gateway address consist of eight 4-digit hexadecimal values. Enter the full address by working across a series of four panels to update each of the 4-digit hexadecimal values that make up the IPv6 address. The panels consist of eight fields, where each field is a 4-digit hexadecimal value.
8. Use the right navigation button to move to the next field. Use the up or down navigation buttons to change the value of this field.
9. Repeat step 7 for each of the remaining fields of the IP address.
10. When the last field of the IP address has been changed, press the select button.
11. Press the right arrow button:
a. For IPv4, IPv4 Subnet: is displayed.
b. For IPv6, IPv6 Prefix: is displayed.
12. Press the select button.
13. Change the fields for IPv4 Subnet in the same way that the IPv4 address fields were changed. There is only a single field for the IPv6 Prefix.
14. When the last field of the IPv4 Subnet (or the IPv6 Prefix) has been changed, press the select button.
15. Press the right navigation button:
a. For IPv4, IPv4 Gateway: is displayed.
b. For IPv6, IPv6 Gateway: is displayed.
16. Press the select button.
17. Change the fields for the appropriate gateway in the same way that the IPv4/IPv6 address fields were changed.
18. When the changes to all of the gateway fields have been made, press the select button.
19. To review the settings before creating the cluster, use the right and left buttons. Make any necessary changes, then use the right and left buttons to reach Confirm Created?, and press the select button.
20. After you complete this task, the following information is displayed on the service display panel:
Cluster: is displayed on line 1.
A temporary, system-assigned cluster name that is based on the IP address is displayed on line 2.
If the cluster is not created, Create Failed: is displayed on line 1 of the service display. Line 2 contains an error code. Refer to the error codes that are documented in IBM System Storage SAN Volume Controller: Service Guide, GC26-7901, to identify the reason why the cluster creation failed and the corrective action to take.
After you have created the cluster on the front panel with the correct IP address format, you can finish the cluster configuration by accessing the management GUI, completing the Create Cluster wizard, and adding nodes to the cluster.


Important: At this time, do not repeat this procedure to add other nodes to the cluster. To add nodes to the cluster, follow the steps described in 9.9.2, Adding a node on page 527 and in 10.12.3, Adding a node to the cluster on page 804.

4.4 Configuring the GUI


After you have performed the activities in 4.3, Setting up the SVC cluster on page 111, complete the cluster setup by using the SVC Console. Follow the steps detailed in 4.4.1, Completing the Create Cluster Wizard on page 118, to create the cluster and complete the configuration.
Important: Make sure that the SVC cluster IP address (svcclusterip) can be reached successfully by entering a ping command from the network, as sketched below.
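For example, from a Windows workstation on the same network, the check can be as simple as the following command, where 9.43.86.117 stands in for your actual cluster IP address:

C:\> ping 9.43.86.117

If the cluster does not reply, verify the workstation network settings and the cluster IP address that was entered on the front panel before continuing.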

4.4.1 Completing the Create Cluster Wizard


You can access the management GUI by opening any supported web browser:
1. Open the web GUI from the SSPC Console or from a supported web browser on any workstation that can communicate with the cluster. The recommended web browser is Firefox version 6. Open a supported web browser and point it to the IP address that you entered in step 7 on page 116:
http://svcclusteripaddress/
(Note that the browser is redirected to https://svcclusteripaddress/, which is the default for access to the SVC cluster.)

Figure 4-12 shows the SVC 6.3 Welcome window.


Figure 4-12 Welcome window

2. Enter the default superuser password: passw0rd (with a zero) and click Continue, as shown in Figure 4-13.

Figure 4-13 Login window

3. On the next page, read the license agreement carefully. To agree with it, select I agree with the terms in the license agreement and click Next as shown in Figure 4-14.


Figure 4-14 License Agreement window

4. At the Name, Date, and Time window (Figure 4-15), fill in the following details:
A cluster name (system name): This name is case sensitive and can consist of A to Z, a to z, 0 to 9, and the underscore (_). It cannot start with a number. It has a minimum of one character and a maximum of 60 characters.
A time zone: You can select the time zone for the cluster here.
A date and a time: Here you can change the date and the time of your cluster. If you are using a Network Time Protocol (NTP) server, you can enter the IP address of the NTP server by selecting Set NTP Server IP Address.
Click Next to confirm your changes.

Figure 4-15 Name, Date and Time window


5. The Change Date and Time Settings window appears to complete updates on the cluster; see Figure 4-16. When the task is completed, click Close.

Figure 4-16 Change Date and Time Settings window

6. Next, the System License window is displayed, as shown in Figure 4-17. To continue, fill out the Virtualization Limit, FlashCopy Limit, Global and Metro Mirror Limit, and Real-Time Compression Limit fields with the number of terabytes that are licensed for each function. If you do not have a license for any of these features, leave the value at 0. Click Next.

Figure 4-17 System License Settings

7. The Configure Email Event Notification window is displayed as shown in Figure 4-18.


Figure 4-18 Configure Email Event Notification window

To ensure that your system continues to run smoothly, you can enable email event notifications. Email event notifications send messages about error, warning, or informational events and inventory reports to an email address of local or remote support personnel. Ensure that all the information you provide is valid; otherwise, email notification is disabled. If you do not want to configure notifications, or if you want to do it later, click Next and go to step 8 on page 125. If you want to configure them, click Configure Email Event Notifications and a wizard appears.
a. On the first page, shown in Figure 4-19, fill in the information required to enable IBM Support personnel to contact this person to assist with problem resolution (Contact Name, Email Reply Address, Machine Location, and Phone). Ensure that all contact information is valid. Then, click Next.

Figure 4-19 Define Company Contact information

b. On the next page, shown in Figure 4-20, configure at least one email server that is used by your site and optionally, enable inventory reporting. Enter a valid IP address and a server port for each server added. Ensure that the email servers are valid. Inventory reports allow IBM service personnel to proactively notify you of any known issues with your system. To activate it, enable inventory reporting and choose a Reporting Interval in this window.


Figure 4-20 Configure Email Servers and Inventory Reporting window

c. Next, as shown on Figure 4-21, you can configure email addresses to receive notifications. It is a best practice to have one of the email addresses be a support user with the error event notification type enabled to notify IBM service personnel if an error condition occurs on your system. Ensure that all email addresses are valid.

Figure 4-21 Configure Email Addresses window

d. The last window, shown in Figure 4-22, is a summary of your Email Event Notification wizard. Click Finish to complete the setup.


Figure 4-22 Email Event Notification Summary window

e. The wizard is now closed and additional information has been added, as shown in Figure 4-23. You can edit or discard your changes from this window. Then, click Next.

Figure 4-23 Configure Email Event Notification window with configuration information


8. Next, you can add available nodes to your cluster; see Figure 4-24.

Figure 4-24 Hardware window

To complete this operation, click an empty node position to view the candidate nodes.
Important: Keep in mind that you need at least two nodes per I/O Group.
Add your available nodes in sequence. For an empty slot, select the node that you want to add to your cluster from the drop-down list. Then change its name and click Add Node, as shown in Figure 4-25.

Figure 4-25 Add a node to the cluster


A pop-up window appears to inform you about the time required to add a node to the cluster; see Figure 4-26. If you want to add it, click the OK button.

Figure 4-26 Warning message

The Add New Node window appears to complete the update on the cluster, as shown in Figure 4-27. When the task is completed, click Close.

Figure 4-27 Add New Node window

After your node has been successfully added to the cluster, the Hardware window from Figure 4-24 is updated, as shown in Figure 4-28.


Figure 4-28 Hardware window with a second node added to the cluster

When all of your nodes have been added to your cluster, click Finish.
9. Several operations then run to update the cluster configuration, as shown in Figure 4-29. When the task is completed, click Close.

Figure 4-29 Final cluster update window

10. Your cluster is now successfully created. However, there are several remaining tasks to complete before you use the cluster, such as changing the default superuser password and defining the service IP addresses. We guide you through these tasks in the following sections.


4.4.2 Changing the default superuser password


1. Log into the cluster using your web browser, and enter the user superuser and its default password: passw0rd (with a zero) as shown in Figure 4-30. Then click Login.

Figure 4-30 Login window

2. From the GUI, select Access → Users, as shown in Figure 4-31.


Figure 4-31 Users window

3. Right-click the superuser user and select Properties, as shown in Figure 4-32.

Figure 4-32 Edit superuser settings window

4. Click Change, as shown in Figure 4-33.


Figure 4-33 User Properties window

5. Enter the new password twice and validate your change by clicking OK, as shown in Figure 4-34.

Figure 4-34 Modifying password


4.4.3 Configuring the Service IP Addresses


Configuring the service IP addresses is important because they let you access the Service Assistant Tool. If there is an issue with a node, the tool allows you to view a detailed status and error summary, and to manage service actions on that node.
1. To configure the service IP addresses, select Configuration → Network, as shown in Figure 4-35.

Figure 4-35 Network window

2. Select Service IP addresses as shown in Figure 4-36.

Figure 4-36 Service IP Addresses window

3. Select one node, then click the port to which you want to assign a service IP address; see Figure 4-37.


Figure 4-37 Configure Service IP window

4. Depending on whether you have installed an IPv4 or an IPv6 cluster, the information to enter differs.
For IPv4:
Type an IPv4 address in the IP Address field.
Type an IPv4 subnet mask in the Subnet Mask field.
Type an IPv4 gateway in the Gateway field.
For IPv6:
Select the Show IPv6 button.
Type an IPv6 prefix in the IPv6 Network Prefix field. The prefix field can have a value of 0 to 127.
Type an IPv6 address in the IP Address field.
Type an IPv6 gateway in the Gateway field.
After the information has been entered, click OK to confirm the modification, as shown in Figure 4-38.

Figure 4-38 Service IP window

5. Repeat steps 3 and 4 for each node in your cluster.

4.4.4 Postrequisites
Perform the following steps to complete the SVC cluster configuration. We explain all of these steps in greater detail in Chapter 9, SAN Volume Controller operations using the command-line interface on page 467, and in Chapter 10, SAN Volume Controller operations using the GUI on page 631.
a. Configure SSH keys for the command-line user, as shown in 4.5, Secure Shell overview on page 133.
b. Configure user authentication and authorization.
c. Set up event notifications and inventory reporting.
d. Create the storage pools.
e. Add MDisks to the storage pools.
f. Identify and create volumes.
g. Create host objects and map volumes to them.
h. Identify and configure FlashCopy mappings and Metro Mirror relationships.
i. Back up the configuration data.

4.5 Secure Shell overview


Since SVC 5.1, SSH key authentication is no longer needed for the GUI, nor is it required for the SVC command-line interface. Beginning with SVC 6.3, you can choose between password authentication and SSH key authentication for the SVC command-line interface, or use both. Secure Shell is explained in the following sections.
Tip: If you choose not to create an SSH key pair, you can still access the SVC cluster using the SVC command-line interface, as long as the user has a password, because you will be authenticated through the username and password.
The connection is secured by means of a private key and a public key pair:
1. A public key and a private key are generated together as a pair.
2. The public key is uploaded to the SSH server (the SVC cluster).
3. The private key identifies the client and is checked against the public key during the connection. The private key must be protected.
4. The SSH server must also identify itself with a specific host key.
5. If the client does not have that host key yet, it is added to a list of known hosts.
Secure Shell is the communication vehicle between the management system (the System Storage Productivity Center or any workstation) and the SVC cluster. The SSH client provides a secure environment from which to connect to a remote machine. It uses the principles of public and private keys for authentication.
SSH keys are generated by the SSH client software. The SSH keys include a public key, which is uploaded to and maintained by the cluster, and a private key, which is kept private on the workstation that is running the SSH client. These keys authorize specific users to access the administration and service functions on the cluster. Each key pair is associated with a user-defined ID string that can consist of up to 40 characters. Up to 100 keys can be stored on the cluster. New IDs and keys can be added, and unwanted IDs and keys can be deleted.


To use the CLI, an SSH client must be installed on that system, the SSH key pair must be generated on the client system, and the client's SSH public key must be stored on the SVC clusters. The System Storage Productivity Center or any other workstation must have the freeware implementation of SSH-2 for Windows called PuTTY pre-installed. This software provides the SSH client function for users logged in to the SVC Console who want to invoke the CLI to manage the SVC cluster.

4.5.1 Generating public and private SSH key pairs using PuTTY
Perform the following steps to generate SSH keys on the SSH client system:
1. Start the PuTTY Key Generator to generate public and private SSH keys. From the client desktop, select Start → Programs → PuTTY → PuTTYgen.
2. On the PuTTY Key Generator GUI window (Figure 4-39), generate the keys:
a. Select SSH-2 RSA.
b. Leave the number of bits in a generated key at 1024.
c. Click Generate.

Figure 4-39 PuTTY key generator GUI

3. Move the cursor onto the blank area to generate the keys.
To generate keys: The blank area indicated by the message is the large blank rectangle inside the section of the GUI labeled Key. Continue to move the mouse pointer over the blank area until the progress bar reaches the far right. This action generates random characters to create a unique key pair.


4. After the keys are generated, save them for later use:
a. Click Save public key, as shown in Figure 4-40.

Figure 4-40 Saving the public key

b. You are prompted for a name (for example, pubkey) and a location for the public key (for example, C:\Support Utils\PuTTY). Click Save. If another name or location is chosen, ensure that a record of the name or location is kept, because the name and location of this SSH public key must be specified in the steps that are documented in 4.5.2, Uploading the SSH public key to the SVC cluster on page 136.
Tip: The PuTTY Key Generator saves the public key with no extension, by default. Use the string pub in naming the public key, for example, pubkey, to easily differentiate the SSH public key from the SSH private key.
c. In the PuTTY Key Generator window, click Save private key.
d. You are prompted with a warning message, as shown in Figure 4-41. Click Yes to save the private key without a passphrase.

Figure 4-41 Saving the private key without a passphrase


e. When prompted, enter a name (for example, icat) and a location for the private key (for example, C:\Support Utils\PuTTY). Click Save. We suggest that you use the default name icat.ppk, because in SVC clusters running versions prior to SVC 5.1, this key was used for icat application authentication and had to have this default name.
Private key extension: The PuTTY Key Generator saves the private key with the PPK extension.
5. Close the PuTTY Key Generator GUI.
6. Navigate to the directory where the private key was saved (for example, C:\Support Utils\PuTTY).

4.5.2 Uploading the SSH public key to the SVC cluster


After you have created your SSH key pair, you need to upload your SSH public key onto the SVC cluster:
1. Point your browser to https://svcclusteripaddress/ and, from the GUI, go to the Access management interface, as shown in Figure 4-31. Select Users, and then, on the next window, select Create a User from the list, as shown in Figure 4-42. Then click Go.

Figure 4-42 Create a user

2. From the Create a User window, insert the user ID name that you want to create and the password. Also select the access level that you want to assign to the user (remember that Security Administrator is the maximum level) and browse to the SSH public key file that you created for this user, as shown in Figure 4-43. Click OK.

Figure 4-43 Create user and password

3. You have completed the user creation process and uploaded the user's SSH public key, which will be paired later with the user's private .ppk key, as described in 4.5.3, Configuring the PuTTY session for the CLI on page 137. Figure 4-44 shows the successful upload of the SSH admin key.

Figure 4-44 Adding the SSH admin key successfully

You have now completed the basic setup requirements for the SVC cluster using the SVC cluster web interface.
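As an alternative to the GUI, a user and its SSH key can also be created from an existing administrative CLI session. The following one-line sketch assumes the svctask mkuser command with its -usergrp and -keyfile parameters and a public key file that has already been copied to the cluster; the user name, group, and path are illustrative, so check the command reference for your code level before relying on this syntax:

svctask mkuser -name redbook_user -usergrp SecurityAdmin -keyfile /tmp/pubkey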

4.5.3 Configuring the PuTTY session for the CLI


Before the CLI can be used, the PuTTY session must be configured, either by using the SSH keys that were generated earlier in 4.5.1, Generating public and private SSH key pairs using PuTTY on page 134, or by username and password, if you have configured the user without an SSH key.


Perform these steps to configure the PuTTY session on the SSH client system:
1. From the System Storage Productivity Center Windows desktop, select Start → Programs → PuTTY → PuTTY to open the PuTTY Configuration GUI window.
2. In the PuTTY Configuration window (Figure 4-45), from the Category pane on the left, click Session, if it is not selected.
Tip: The items selected in the Category pane affect the content that appears in the right pane.

Figure 4-45 PuTTY Configuration window

3. In the right pane, under the Specify the destination you want to connect to section, select SSH. Under the Close window on exit section, select Only on clean exit, which ensures that if there are any connection errors, they are displayed in the user's window.
4. From the Category pane on the left side of the PuTTY Configuration window, click Connection → SSH to display the PuTTY SSH Configuration window, as shown in Figure 4-46.


Figure 4-46 PuTTY SSH connection configuration window

5. In the right pane, in the Preferred SSH protocol version section, select 2.
6. From the Category pane on the left side of the PuTTY Configuration window, select Connection → SSH → Auth.
7. In the right pane, shown in Figure 4-47, in the Private key file for authentication field under the Authentication parameters section, either browse to or type the fully qualified directory path and file name of the SSH client private key file created earlier (for example, C:\Support Utils\PuTTY\icat.PPK).
8. You can skip the Connection → SSH → Auth part if you created the user with password authentication only and no SSH key.


Figure 4-47 PuTTY Configuration: Private key location

9. From the Category pane on the left side of the PuTTY Configuration window, click Session.
10. In the right pane, follow these steps, as shown in Figure 4-48:
a. Under the Load, save, or delete a stored session section, select Default Settings, and click Save.
b. For the Host Name (or IP address), type the IP address of the SVC cluster.
c. In the Saved Sessions field, type a name (for example, SVC) to associate with this session.
d. Click Save.


Figure 4-48 PuTTY Configuration: Saving a session

You can now either close the PuTTY Configuration window or leave it open to continue.
Tips: When you enter the Host Name or IP address in PuTTY, insert your SVC user followed by @ before the Host Name or IP address, as shown previously. This way, you do not have to enter your user each time you access your SVC cluster. Note that if you have not created an SSH key, you are prompted for the password that you set for the user.
Normally, output that comes from the SVC is wider than the default PuTTY window size. Change your PuTTY window appearance to use a font with a character size of 8. To change it, click the Appearance item in the Category tree, as shown in Figure 4-48, and then click Font. Choose a font with a character size of 8.

4.5.4 Starting the PuTTY CLI session


The PuTTY application is required for all CLI tasks. If it was closed for any reason, restart the session as detailed here:
1. From the SVC Console desktop, open the PuTTY application by selecting Start → Programs → PuTTY.
2. On the PuTTY Configuration window (Figure 4-49), select the session saved earlier (in our example, ITSO-SVC1), and click Load.
3. Click Open.


Figure 4-49 Open PuTTY command-line session

4. If this is the first time that the PuTTY application is being used since you generated and uploaded the SSH key pair, a PuTTY Security Alert window opens stating that the server's host key is not yet cached, as shown in Figure 4-50. Click Yes, which invokes the CLI.

Figure 4-50 PuTTY Security Alert

5. As shown in Example 4-1, the private key used in this PuTTY session is now authenticated against the public key that was uploaded to the SVC cluster.
Example 4-1 Authenticating

Using username "admin".
Authenticating with public key "rsa-key-20100909"
IBM_2145:ITSO_SVC1:admin>

You have now completed the tasks that are required to configure the CLI for SVC administration from the SVC Console. You can close the PuTTY session.
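If you need to run single commands non-interactively, for example from a script, the plink utility that is installed with PuTTY can be used instead of an interactive session. A minimal sketch, assuming the private key created earlier and a hypothetical cluster IP address, follows; the command opens an SSH connection, runs svcinfo lssystem on the cluster, prints the output, and exits:

plink -ssh -i "C:\Support Utils\PuTTY\icat.ppk" admin@9.43.86.117 svcinfo lssystem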


4.5.5 Configuring SSH for AIX clients


To configure SSH for AIX clients, follow these steps:
1. The SVC cluster IP address must be reachable using the ping command from the AIX workstation from which cluster access is desired.
2. OpenSSL must be installed for OpenSSH to work. Install OpenSSH on the AIX client:
a. The installation images can be found at these websites:
https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?source=aixbp
http://sourceforge.net/projects/openssh-aix
b. Follow the instructions carefully, because OpenSSL must be installed before using SSH.
3. Generate an SSH key pair:
a. Run the cd command to go to the /.ssh directory.
b. Run the ssh-keygen -t rsa command.
c. The following message is displayed:
Generating public/private rsa key pair. Enter file in which to save the key (//.ssh/id_rsa)
d. Pressing Enter uses the default file that is shown in parentheses; otherwise, enter a file name (for example, aixkey), and press Enter.
e. The following prompt is displayed:
Enter a passphrase (empty for no passphrase)
If the CLI will be used interactively, enter a passphrase, because there is no other authentication when connecting through the CLI. After typing the passphrase, press Enter.
f. The following prompt is displayed:
Enter same passphrase again:
Type the passphrase again, and then press Enter.
g. A message is displayed indicating that the key pair has been created. The private key file has the name entered previously (for example, aixkey). The public key file has the name entered previously with an extension of .pub (for example, aixkey.pub).
Using a passphrase: If you are generating an SSH key pair so that you can use the CLI interactively, use a passphrase so that you must authenticate every time you connect to the cluster. It is possible to have a passphrase-protected key for scripted usage, but you will have to use the expect command or a similar tool to have the passphrase parsed into the ssh command.
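After the public key (aixkey.pub in this example) has been uploaded to an SVC user as described in 4.5.2, Uploading the SSH public key to the SVC cluster on page 136, a CLI session can be opened from AIX as sketched here; the user name and cluster IP address are illustrative:

ssh -i /.ssh/aixkey admin@9.43.86.117

You are prompted for the passphrase (if one was set), and the SVC CLI prompt is then displayed.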

4.6 Using IPv6


You can use IPv4, IPv6, or both in a dual-stack configuration. Migrating to (or from) IPv6 can be done remotely and is nondisruptive.
Using IPv6: To remotely access SVC clusters running IPv6, you must run a supported web browser and have IPv6 configured on your local workstation.


4.6.1 Migrating a cluster from IPv4 to IPv6


As a prerequisite, have IPv6 already enabled and configured on your local workstation. In our case, we have configured an interface with IPv4 and IPv6 addresses on the System Storage Productivity Center, as shown in Example 4-2.
Example 4-2 Output of ipconfig on System Storage Productivity Center

C:\Documents and Settings\Administrator>ipconfig

Windows IP Configuration

Ethernet adapter IPv6:

   Connection-specific DNS Suffix . :
   IP Address. . . . . . . . . . . . : 10.0.1.115
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   IP Address. . . . . . . . . . . . : 2001:610::115
   IP Address. . . . . . . . . . . . : fe80::214:5eff:fecd:9352%5
   Default Gateway . . . . . . . . . :

To update a cluster, follow these steps:
1. Select Configuration → Network, as shown in Figure 4-51.

Figure 4-51 Network window


2. Select Management IP Addresses, then click port 1 of one of the nodes, as shown in Figure 4-52.

Figure 4-52 Management IP Addresses

3. In the window that is shown in Figure 4-53, follow these steps:
a. Select Show IPv6.
b. Type an IPv6 prefix in the IPv6 Network Prefix field. The prefix field can have a value of 0 to 127.
c. Type an IPv6 address in the IP Address field.
d. Type an IPv6 gateway in the Gateway field.
e. Click OK.

Figure 4-53 Modify IP Addresses: Adding IPv6 addresses

4. A confirmation window is displayed (Figure 4-54). Click Apply Changes.


Figure 4-54 Confirm changes window

5. The Change Management task is launched on the server as shown in Figure 4-55. Click Close when the task is completed.

Figure 4-55 Change Management IP window

6. Test the IPv6 connectivity using the ping command from a cmd.exe session on your local workstation (as shown in Example 4-3).
Example 4-3 Testing IPv6 connectivity to the SVC cluster

C:\Documents and Settings\Administrator>ping 2001:0610:0000:0000:0000:0000:0000:119

Pinging 2001:610::119 from 2001:610::115 with 32 bytes of data:

Reply from 2001:610::119: time=3ms
Reply from 2001:610::119: time<1ms
Reply from 2001:610::119: time<1ms
Reply from 2001:610::119: time<1ms

Ping statistics for 2001:610::119:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),


Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 3ms, Average = 0ms

7. Test the IPv6 connectivity to the cluster using a compatible IPv6 and SVC web browser on your local workstation; see Figure 4-56.

Figure 4-56 Testing IPv6 SVC GUI access using a compatible web browser

Tip: To access an IPv6 address in a web browser, you need to enclose the IP address in square brackets, as shown at the top of Figure 4-56.
8. Finally, remove the IPv4 address in the SVC GUI by accessing the same window as shown in Figure 4-53, and validate this change by clicking OK. The same change can also be made from the CLI, as sketched below.
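For reference, the management IP addresses can also be changed from the CLI with the svctask chsystemip command. The IPv6 parameter names in this sketch (-clusterip_6, -gw_6, -prefix_6) are our assumption; verify them against the command reference for your code level before use:

svctask chsystemip -clusterip_6 2001:610::119 -gw_6 2001:610::1 -prefix_6 64 -port 1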

4.6.2 Migrating a cluster from IPv6 to IPv4


The process of migrating a cluster from IPv6 to IPv4 is identical to the process described in 4.6.1, Migrating a cluster from IPv4 to IPv6 on page 144, except that you add IPv4 addresses and remove the IPv6 addresses.



Chapter 5. Host configuration
In this chapter we describe the basic host configuration procedures that are required to attach supported hosts to the IBM System Storage SAN Volume Controller (SVC).


5.1 Host attachment overview for IBM System Storage SAN Volume Controller
The IBM System Storage SAN Volume Controller supports a wide range of host types (both IBM and non-IBM), making it possible to consolidate storage in an open systems environment into a common pool of storage. The storage pool can then be utilized and managed more efficiently as a single entity from a central point on the SAN. The benefits of storage virtualization have been discussed in more depth earlier in this book.
The ability to consolidate storage for attached open systems hosts provides the following benefits:
Unified, easier storage management
Increased utilization rate of the installed storage capacity
Advanced Copy Services functions offered across storage systems from different vendors
Only one kind of multipath driver to consider when attaching hosts

5.2 SVC setup


In the vast majority of IBM SAN Volume Controller (SVC) environments, where high performance and high availability requirements exist, hosts are attached through a storage area network (SAN) utilizing the Fibre Channel protocol. Even though other SAN configurations are supported (for example, a single fabric design), it is a best practice, and a commonly used setup, to build the SAN from two independent fabrics. This design provides redundant paths and prevents unwanted interference between the fabrics if an incident affects one of them.
Starting with SVC 5.1, iSCSI connectivity was introduced to provide an alternative method to attach hosts through an Ethernet local area network (LAN). However, all inter-node communication within the SVC clustered system, between the SVC and its back-end storage subsystems, and between SVC clustered systems takes place solely through Fibre Channel (FC). More information about SVC iSCSI connectivity is available in 5.3, iSCSI on page 156.
Redundant paths to volumes can be provided for both SAN-attached and iSCSI-attached hosts. Figure 5-1 on page 151 shows the types of attachment that are supported with the SVC 6.3 release.


Figure 5-1 SVC host attachment overview

5.2.1 Fibre Channel and SAN setup overview


Host attachment to the SVC via Fibre Channel (FC) must be made through a SAN fabric, because direct attachment to the SVC nodes is not supported. For SVC configurations, it is a best practice to use two redundant SAN fabrics. Therefore, it is recommended to equip each host with a minimum of two host bus adapters (HBAs), or at least a dual-port HBA, with each HBA connected to a SAN switch in either fabric.
SVC imposes no particular limit on the actual distance between SVC nodes and host servers. A server can therefore be attached to an edge switch in a core-edge configuration, where the SVC cluster resides at the core of the fabric. For host attachment, SVC supports up to three interswitch link (ISL) hops in the fabric, which means that the server and the SVC can be separated by up to five Fibre Channel links, four of which can be 10 km (6.2 miles) long if longwave small form-factor pluggables (SFPs) are used. The SVC nodes themselves contain shortwave SFPs and must therefore be within 300 m (0.186 miles) of the switch to which they are attached. The configuration shown in Figure 5-2 on page 152 is therefore supported.


Figure 5-2 Example of host connectivity

In this figure, the optical distance between SVC Node 1 and Host 2 is just over 40 km. To avoid latencies that lead to degraded performance, it is recommended to avoid ISL hops whenever possible. That is, in an optimal setup, the servers are connected to the same SAN switch as the SVC nodes.
Remember these limits when connecting host servers to an SVC:
Up to 256 hosts per I/O Group, which results in a total of 1,024 hosts per cluster. Note that if the same host is connected to multiple I/O Groups of a cluster, it counts as a host in each of these groups.
A total of 512 distinct configured host worldwide port names (WWPNs) are supported per I/O Group. This limit is the sum of the FC host ports and the host iSCSI names (an internal WWPN is generated for each iSCSI name) associated with all of the hosts that are associated with a single I/O Group.
Access from a server to an SVC cluster through the SAN fabric is defined by means of switch zoning. Consider these rules for zoning hosts with the SVC:
Homogeneous HBA port zones
Switch zones containing HBAs must contain HBAs from similar host types and similar HBAs in the same host. For example, AIX and NT hosts must be in separate zones, and QLogic and Emulex adapters must also be in separate zones.
Important: A configuration that breaches this rule is unsupported because it can introduce instability to the environment.


HBA to SVC port zones
Place each host HBA in a separate zone along with one or two SVC ports. If you use two ports, use one from each node in the I/O Group. Do not place more than two SVC ports in a zone with an HBA, because this results in more than the recommended number of paths as seen from the host multipath driver.
Recommended number of paths per volume (n+1 redundancy):
With 2 HBA ports: zone HBA ports to SVC ports 1 to 2 for a total of four paths
With 4 HBA ports: zone HBA ports to SVC ports 1 to 1 for a total of four paths
Optional (n+2 redundancy):
With 4 HBA ports: zone HBA ports to SVC ports 1 to 2 for a total of eight paths
Note: Here the term HBA port is used to describe the SCSI initiator, and SVC port is used to describe the SCSI target.
Maximum host paths per LU
For any volume, the number of paths through the SAN from the SVC nodes to a host must not exceed eight. For most configurations, four paths to an I/O Group (four paths to each volume that is provided by this I/O Group) are sufficient.
Note: The maximum number of host paths per LU must not exceed eight.
Balanced host load across HBA ports
To obtain the best performance from a host with multiple ports, ensure that each host port is zoned with a separate group of SVC ports.
Balanced host load across SVC ports
To obtain the best overall performance of the subsystem and to prevent overloading, the workload to each SVC port must be equal. You can achieve this balance by zoning approximately the same number of host ports to each SVC port.
Figure 5-3 on page 154 shows an overview of a configuration where the servers each contain two single-port HBAs. Attempt to distribute the attached hosts equally between two logical sets per I/O Group. Connect hosts from each set to the same group of SVC ports. This port group includes exactly one port from each SVC node in the I/O Group. The zoning defines the correct connections.
The port groups are defined as follows:
Hosts in host set one of an I/O Group are always zoned to the P1 and P4 ports on both nodes, for example, N1/N2 of I/O Group zero.
Hosts in host set two of an I/O Group are always zoned to the P2 and P3 ports on both nodes of an I/O Group.
You can create aliases for these port groups (per I/O Group):
Fabric A: IOGRP0_PG1 N1_P1;N2_P1, IOGRP0_PG2 N1_P3;N2_P3
Fabric B: IOGRP0_PG1 N1_P4;N2_P4, IOGRP0_PG2 N1_P2;N2_P2
Create host zones by always using the host port WWPN plus the PG1 alias for hosts in the first host set. Always use the host port WWPN plus the PG2 alias for hosts from the second host set. If a host has to be zoned to multiple I/O Groups, simply add the PG1 or PG2 aliases from the specific I/O Groups to the host zone; an illustrative switch CLI sketch follows Figure 5-3.
Using this schema provides four paths to one I/O Group for each host and helps to maintain an equal distribution of host connections on the SVC ports. Figure 5-3 shows an overview of this host zoning schema.

Figure 5-3 Overview of four-path host zoning
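On a Brocade fabric, for example, the port group aliases and a host zone could be defined as sketched here. This is an illustration only: the alicreate and zonecreate commands are Brocade FOS syntax, and the WWPN placeholders (N1_P1_wwpn, and so on) must be replaced with the real port names from your fabric:

alicreate "IOGRP0_PG1", "N1_P1_wwpn; N2_P1_wwpn"
alicreate "IOGRP0_PG2", "N1_P3_wwpn; N2_P3_wwpn"
zonecreate "ZONE_HOST1_IOGRP0", "HOST1_HBA1_wwpn; IOGRP0_PG1"

After the zones are added to the active configuration and enabled, the host sees one port from each node of the I/O Group.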

When possible, use the minimum number of paths necessary to achieve a sufficient level of redundancy. For an SVC environment, no more than four paths per I/O Group are required to accomplish this.
Remember that all paths must be managed by the multipath driver on the host side. If we assume that a server is connected through four ports to the SVC, each volume is seen through eight paths. With 125 volumes mapped to this server, the multipath driver has to support handling up to 1,000 active paths (8 x 125).
You can find configuration and operational details about the IBM Subsystem Device Driver (SDD) in the Multipath Subsystem Device Driver User's Guide, at the following website:
http://ibm.com/support/docview.wss?uid=ssg1S7000303
For hosts using four HBAs/ports with eight connections to an I/O Group, use the zoning schema that is shown in Figure 5-4 on page 155. You can combine this schema with the previous four-path zoning schema.


Figure 5-4 Overview of eight-path host zoning

5.2.2 Port mask


A port mask feature is available in SVC. The port mask is associated with a host object. The port mask controls which SVC (target) ports a particular host can access. By default, port masking is set such that all attached hosts can see the same set of SCSI logical unit numbers (LUNs) from each of the four FC ports on each node in the respective I/O Group. The port mask applies to logins from any of the host (initiator) ports associated with the host object in the configuration model. The port mask consists of four binary bits, represented in the command-line interface (CLI) as 0 or 1. The rightmost bit is associated with FC port 1 on each node. The leftmost bit is associated with port 4. A 1 in any particular bit position allows access to that port and a zero denies access. The default port mask is 1111, allowing access to all SVC node FC ports for that host object. From the GUI, you can use the port mask feature as shown in Figure 5-5 on page 156. For each login between an HBA port and an SVC node port, SVC allows access based on the port mask defined within the host object to which the HBA belongs. If access is denied, SVC responds to SCSI commands as though the HBA port is unknown to the SVC.
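As a sketch, a host that should only log in to ports 1 and 2 of each node could be restricted from the CLI as shown here. We assume the -mask parameter of the svctask chhost command and an illustrative host name; verify the syntax against the command reference before use:

svctask chhost -mask 0011 Almaden_AIX

With this mask, logins on ports 3 and 4 are denied for that host, while ports 1 and 2 remain accessible.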


Figure 5-5 Create Host object - Port Mask

5.3 iSCSI
iSCSI is a block-level protocol that encapsulates SCSI commands into TCP/IP packets and thereby leverages an existing IP network instead of requiring FC HBAs and a SAN fabric infrastructure. The iSCSI standard is defined by RFC 3720. iSCSI connectivity is a software feature that is provided by the SVC code. iSCSI-attached hosts can utilize either a single network connection or multiple network connections.
Important: Only host attachment to SVC via iSCSI is supported. SVC-to-storage connections are not supported.
Each SVC node is equipped with two on-board ethernet network interface cards (NICs), capable of operating at a link speed of 10, 100, or 1000 Mbps. Both of these can be used to carry iSCSI traffic. Each node's NIC numbered 1 is used as the primary SVC cluster management port. For optimal performance, it is advisable to use a 1 Gbps ethernet connection between the SVC and iSCSI-attached hosts when using the SVC nodes' on-board NICs.
Starting with the SVC 2145-CG8, an optional 10 Gbps 2-port ethernet adapter (Feature Code #5700) is available. The required 10 Gbps shortwave SFPs are available as FC #5711. If the 10 GbE option is installed, no internal SSDs can be installed. The 10 GbE option is solely to be used for iSCSI traffic.

5.3.1 Initiators and targets


An iSCSI client, which is known as an (iSCSI) initiator, sends SCSI commands over an IP network to an iSCSI target. We refer to a single iSCSI initiator or iSCSI target as an iSCSI node. There are several types of iSCSI initiators that can be used in host systems:


Software initiator: available for most operating systems, for example, AIX, Linux, and Windows
Hardware initiator: implemented as a network adapter with an integrated iSCSI processing unit, also known as an iSCSI HBA
Supported operating systems for iSCSI host attachment, as well as supported iSCSI HBAs, can be found at the following websites:
SVC V6.3 Support Matrix
http://ibm.com/support/docview.wss?uid=ssg1S1003907
SVC Information Center
http://publib.boulder.ibm.com/infocenter/svc/ic/index.jsp
An iSCSI target refers to a storage resource that is located on an iSCSI server or, to be more precise, to one of potentially many instances of iSCSI nodes running on that server as a target.

5.3.2 iSCSI Nodes


There are one or more iSCSI nodes within a network entity. The iSCSI node is accessible through one or more network portals. A network portal is a component of a network entity that has a TCP/IP network address and that can be used by an iSCSI node.
An iSCSI node is identified by its unique iSCSI name, referred to as an IQN. Remember that this name serves only for the identification of the node; it is not the node's address. In iSCSI, the name is separated from the addresses. This separation allows multiple iSCSI nodes to use the same addresses or, as implemented in the SVC, the same iSCSI node to use multiple addresses.

5.3.3 iSCSI Qualified Name (IQN)


An SVC cluster can provide up to eight iSCSI targets, one per node. Each SVC node has its own IQN, which by default is in this form:
iqn.1986-03.com.ibm:2145.<clustername>.<nodename>
An iSCSI host in SVC is defined by specifying its iSCSI initiator names. The following example shows the IQN of a Windows server's iSCSI software initiator:
iqn.1991-05.com.microsoft:itsoserver01
During the configuration of an iSCSI host in the SVC, you must specify the host's initiator IQNs. You can read about host creation in detail in Chapter 9, SAN Volume Controller operations using the command-line interface on page 467, and in Chapter 10, SAN Volume Controller operations using the GUI on page 631.
An alias string can also be associated with an iSCSI node. The alias allows an organization to associate a user-friendly string with the iSCSI name. However, the alias string is not a substitute for the iSCSI name. Figure 5-6 on page 158 shows an overview of the iSCSI implementation in the SVC.


Figure 5-6 SVC iSCSI overview

A host accessing SVC volumes via iSCSI connectivity utilizes one or more ethernet adapters or iSCSI HBAs to connect to the ethernet network.
Both on-board ethernet ports of an SVC node can be configured for iSCSI. If iSCSI is used for host attachment, it is advisable to dedicate ethernet port one for SVC management and port two for iSCSI use. By doing so, port two can be connected to a separate network segment or VLAN for iSCSI, because the SVC does not support the use of VLAN tagging to separate management and iSCSI traffic.
Note that ethernet link aggregation (port trunking) or channel bonding for the SVC nodes' ethernet ports is not supported for the 1 Gbps ports in this release.
For each SVC node, that is, for each instance of an iSCSI target node in the SVC node, two IPv4 and two IPv6 addresses, or iSCSI network portals, can be defined.

5.3.4 iSCSI Setup for SVC and host server


The following basic procedure must be performed when setting up a host server for use as an iSCSI initiator with SAN Volume Controller volumes. The specific steps vary depending on the particular host type and operating system that is involved. To configure a host, first select a software-based iSCSI initiator or a hardware-based iSCSI initiator. For example, the software-based iSCSI initiator can be a Linux or Microsoft Windows iSCSI software initiator, and the hardware-based iSCSI initiator can be an iSCSI host bus adapter inside the host server.


To set up your host server for use as an iSCSI software-based initiator with SAN Volume Controller volumes, perform the following steps (the CLI is used in this example; a command sketch follows the list):
1. Set up your SAN Volume Controller cluster for iSCSI:
a. Select a set of IPv4 or IPv6 addresses for the ethernet ports on the nodes that are in the I/O Groups that will use the iSCSI volumes.
b. Configure the node ethernet ports on each SVC node in the clustered system with the svctask cfgportip command.
c. Verify that you have configured the node and the clustered system's ethernet ports correctly by reviewing the output of the svcinfo lsportip command and the svcinfo lssystemip command.
d. Use the svctask mkvdisk command to create volumes on the SAN Volume Controller clustered system.
e. Use the svctask mkhost command to create a host object on the SAN Volume Controller. It defines the host's iSCSI initiator to which the volumes are to be mapped.
f. Use the svctask mkvdiskhostmap command to map the volume to the host object in the SAN Volume Controller.
2. Set up your host server:
a. Ensure that you have configured your IP interfaces on the server.
b. Make sure that your iSCSI HBA is ready to use, or install the software for the iSCSI software-based initiator on the server, if needed.
c. On the host server, run the configuration methods for iSCSI so that the host server's iSCSI initiator logs in to the SAN Volume Controller clustered system and discovers the SAN Volume Controller volumes. The host then creates host devices for the volumes.
3. After the host devices are created, you can use them with your host applications.
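The following minimal sketch illustrates step 1 for an IPv4 setup, using port 2 of each node for iSCSI as recommended earlier. The pool name, volume name, host name, IQN, and IP addresses are hypothetical:

svctask cfgportip -node 1 -ip 10.0.1.10 -mask 255.255.255.0 -gw 10.0.1.1 2
svctask cfgportip -node 2 -ip 10.0.1.11 -mask 255.255.255.0 -gw 10.0.1.1 2
svcinfo lsportip
svctask mkvdisk -mdiskgrp Pool1 -iogrp 0 -size 50 -unit gb -name iscsi_vol1
svctask mkhost -name itsoserver01 -iscsiname iqn.1991-05.com.microsoft:itsoserver01
svctask mkvdiskhostmap -host itsoserver01 iscsi_vol1

The first two commands assign an iSCSI IP address to port 2 of each node in the I/O Group, lsportip verifies the configuration, and the last three commands create a volume, define the host by its IQN, and map the volume to that host.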

5.3.5 Volume discovery


Hosts can discover volumes through one of the following three mechanisms:
Internet Storage Name Service (iSNS)
SVC can register itself with an iSNS name server; the IP address of this server is set using the svctask chsystem command. A host can then query the iSNS server for available iSCSI targets.
Service Location Protocol (SLP)
The SVC node runs an SLP daemon, which responds to host requests. This daemon reports the available services on the node. One service is the CIMOM, which runs on the configuration node; the iSCSI I/O service can now also be reported.
SCSI Send Target request
The host can also send a Send Target request using the iSCSI protocol to the iSCSI TCP/IP port (port 3260). You must define the network portal IP addresses of the iSCSI targets before a discovery can be started.

5.3.6 Authentication
Authentication of hosts is optional; by default, it is disabled. The user can choose to enable Challenge Handshake Authentication Protocol (CHAP) authentication, which involves sharing a CHAP secret between the cluster and the host. If the correct key is not provided by the host, the SVC does not allow it to perform I/O to volumes. The cluster can also be assigned a CHAP secret, as sketched below.
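As an illustration, a CHAP secret can be set for a host object, and one can be assigned to the clustered system, from the CLI. We assume the -chapsecret parameters of the chhost and chsystem commands here; the host name and secrets are hypothetical:

svctask chhost -chapsecret host_secret_1 itsoserver01
svctask chsystem -chapsecret cluster_secret_1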


5.3.7 Target failover


A new feature with iSCSI is the option to move iSCSI target IP addresses between SVC nodes in an I/O Group. IP addresses are only moved from one node to its partner node if a node goes through a planned or unplanned restart. If the ethernet link to the SVC clustered system fails due to a cause outside of the SVC (such as the cable being disconnected or the ethernet router failing), the SVC makes no attempt to fail over an IP address to restore IP access to the cluster. To enable validation of the ethernet access to the nodes, each node responds to ping at the standard one-per-second rate without frame loss.
There is a concept that is used for handling the iSCSI IP address failover, called a clustered ethernet port. A clustered ethernet port consists of one physical ethernet port on each node in the cluster. The clustered ethernet port contains configuration settings that are shared by all of these ports.
Figure 5-7 shows an example of an iSCSI target node failover. It gives a simplified overview of what happens during a planned or unplanned node restart in an SVC I/O Group. This example refers to SVC nodes with no optional 10 GbE iSCSI adapter installed.
1. During normal operation, one iSCSI target node instance is running on each SVC node. All of the IP addresses (IPv4/IPv6) belonging to this iSCSI target, including the management addresses if the node acts as the configuration node, are presented on the two ports (P1/P2) of a node.
2. During a restart of an SVC node (N1), the iSCSI target, including all of its network portal (IPv4/IPv6) IP addresses defined on Port1/Port2 and the management (IPv4/IPv6) IP addresses (if N1 acted as the configuration node), fails over to Port1/Port2 of the partner node within the I/O Group, that is, node N2. An iSCSI initiator running on a server executes a reconnect to its iSCSI target, that is, the same IP addresses presented now by a new node of the SVC cluster.
3. As soon as the node (N1) has finished its restart, the iSCSI target node (including its IP addresses) running on N2 fails back to N1. Again, the iSCSI initiator running on a server executes a reconnect to its iSCSI target. The management addresses do not fail back; N2 remains in the role of the configuration node for this cluster.


Figure 5-7 iSCSI node failover scenario

5.3.8 Host failover


From a host perspective, a multipathing driver (MPIO) is not required to handle an SVC node failover. In the case of an SVC node restart, the host simply reconnects to the IP addresses of the iSCSI target node, which reappear after several seconds on the ports of the partner node.
A host multipathing driver for iSCSI is required in these situations:
To protect a host from network link failures, including port failures on the SVC nodes
To protect a host from an HBA failure (if two HBAs are in use)
To protect a host from network failures, if it is connected through two HBAs to two separate networks
To provide load balancing on the server's HBA and the network links
The commands for the configuration of the iSCSI IP addresses have been separated from the configuration of the cluster IP addresses. The following commands are new commands for managing iSCSI IP addresses:
The svcinfo lsportip command lists the iSCSI IP addresses assigned for each port on each node in the cluster.
The svctask cfgportip command assigns an IP address to each node's ethernet port for iSCSI I/O.
The following commands are new commands for managing the cluster IP addresses:
The svcinfo lssystemip command returns a list of the cluster management IP addresses configured for each port.
The svctask chsystemip command modifies the IP configuration parameters for the cluster.


For a detailed description about how to use these commands, see Chapter 9, SAN Volume Controller operations using the command-line interface on page 467. The parameters for remote services (SSH and web services) remain associated with the cluster object. During an SVC code upgrade, the configuration settings for the clustered system are applied to node Ethernet port 1.

For iSCSI-based access, using redundant network connections and separating iSCSI traffic by using a dedicated network or VLAN prevents any NIC, switch, or target port failure from compromising the host server's access to the volumes. Because both on-board Ethernet ports of an SVC node can be configured for iSCSI, it is advisable to dedicate Ethernet port 1 to SVC management and port 2 to iSCSI usage. By doing so, port 2 can be connected to a dedicated network segment or VLAN for iSCSI. Because the SVC does not support VLAN tagging to separate management and iSCSI traffic, an option is to assign the corresponding LAN switch port to a dedicated VLAN in order to separate SVC management and iSCSI traffic.

5.3.9 Additional sources of information


Further details on iSCSI implementation are available in these IBM Redbooks publications:
- IBM TotalStorage DS300 and DS400 Best Practices Guide, SG24-7121
  http://www.redbooks.ibm.com/abstracts/sg247121.html?Open
- IBM System Storage DS3000: Introduction and Implementation Guide, SG24-7065
  http://www.redbooks.ibm.com/abstracts/sg247065.html?Open

5.4 AIX-specific information


The following section details specific information that relates to the connection of AIX-based hosts in an SVC environment.

AIX-specific information: In this section, the IBM System p information applies to all AIX hosts that are listed on the SVC interoperability support website, including IBM System i partitions and IBM JS blades.

5.4.1 Configuring the AIX host


The following list outlines the steps required to attach SVC volumes to an AIX host:
1. Install the HBAs in the AIX host system.
2. Ensure that you have installed the correct operating systems and version levels on your host, including any updates and Authorized Program Analysis Reports (APARs) for the operating system.
3. Connect the AIX host system to the FC switches.
4. Configure the FC switch zoning.
5. Install the 2145 host attachment support package; see also 5.4.5, Installing the 2145 host attachment support package on page 165.
6. Install and configure the Subsystem Device Driver Path Control Module (SDDPCM).


7. Perform the logical configuration on the SAN Volume Controller to define the host, volumes, and host mapping.
8. Run cfgmgr to discover and configure the SVC volumes.

The following sections detail the current support information. It is vital that you check the websites that are listed regularly for any updates.

5.4.2 Operating system versions and maintenance levels


At the time of writing, SVC supports AIX levels from V4.3.3 through V7.1. The following AIX levels are supported:
- AIX V4.3.3
- AIX V5.1
- AIX V5.2
- AIX V5.3
- AIX V6.1
- AIX V7.1

For the latest information and device driver support, always refer to the following website:
http://ibm.com/systems/storage/software/virtualization/svc/interop.html
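To confirm the level that is actually installed on a given host, the AIX oslevel command can be used. The following is a minimal sketch; the output line is illustrative and will differ on your system:

# oslevel -r
6100-06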

5.4.3 HBAs for IBM System p hosts


Ensure that your IBM System p AIX hosts contain supported host bus adapters (HBAs). Refer to the following website to obtain current interoperability information:
http://ibm.com/systems/storage/software/virtualization/svc/interop.html

Note: The maximum number of FC ports that are supported in a single host (or logical partition) is four. These ports can be four single-port adapters, two dual-port adapters, or a combination, as long as the number of ports that are attached to the SAN Volume Controller does not exceed four.

5.4.4 Configuring fast fail and dynamic tracking


For hosts running AIX V5.2 or later operating systems, enable both fast fail and dynamic tracking.

Perform the following steps to configure your host system to use the fast fail and dynamic tracking attributes:
1. Issue the following command to set the fast fail attribute on the FC SCSI I/O Controller Protocol Device for each adapter:
chdev -l fscsi0 -a fc_err_recov=fast_fail
The preceding command was for adapter fscsi0. Example 5-1 on page 163 shows the command for both adapters on our test system running AIX 5L V5.3.

Example 5-1 Enable fast fail

#chdev -l fscsi0 -a fc_err_recov=fast_fail
fscsi0 changed


#chdev -l fscsi1 -a fc_err_recov=fast_fail
fscsi1 changed

2. Issue the following command to enable dynamic tracking for each FC device:
chdev -l fscsi0 -a dyntrk=yes
The preceding command was for adapter fscsi0. Example 5-2 shows the command for both adapters on our test system running AIX 5L V5.3.
Example 5-2 Enable dynamic tracking

#chdev -l fscsi0 -a dyntrk=yes
fscsi0 changed
#chdev -l fscsi1 -a dyntrk=yes
fscsi1 changed

Note: The fast fail and dynamic tracking attributes do not persist through an adapter delete and reconfigure. Thus, if the adapters are deleted and then configured back into the system, these attributes will be lost and will need to be reapplied.
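To confirm that both attributes are set, for example after such a reconfiguration, they can be queried with the lsattr command. The following is a minimal sketch; the output shown is illustrative:

#lsattr -El fscsi0 -a fc_err_recov -a dyntrk
fc_err_recov fast_fail FC Fabric Event Error RECOVERY Policy True
dyntrk       yes       Dynamic Tracking of FC Devices       True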

Host adapter configuration settings


You can display the availability of installed host adapters by using the command shown in Example 5-3.
Example 5-3 FC host adapter availability

#lsdev -Cc adapter |grep fcs
fcs0 Available 1Z-08 FC Adapter
fcs1 Available 1D-08 FC Adapter

You can display the worldwide port number (WWPN), along with other attributes including firmware level, by using the command shown in Example 5-4. Note that the WWPN is represented as Network Address.
Example 5-4 FC host adapter settings and WWPN

#lscfg -vpl fcs0
fcs0             U0.1-P2-I4/Q1  FC Adapter

      Part Number.................00P4494
      EC Level....................A
      Serial Number...............1E3120A68D
      Manufacturer................001E
      Device Specific.(CC)........2765
      FRU Number..................00P4495
      Network Address.............10000000C932A7FB
      ROS Level and ID............02C03951
      Device Specific.(Z0)........2002606D
      Device Specific.(Z1)........00000000
      Device Specific.(Z2)........00000000
      Device Specific.(Z3)........03000909
      Device Specific.(Z4)........FF401210
      Device Specific.(Z5)........02C03951
      Device Specific.(Z6)........06433951


      Device Specific.(Z7)........07433951
      Device Specific.(Z8)........20000000C932A7FB
      Device Specific.(Z9)........CS3.91A1
      Device Specific.(ZA)........C1D3.91A1
      Device Specific.(ZB)........C2D3.91A1
      Device Specific.(YL)........U0.1-P2-I4/Q1

PLATFORM SPECIFIC

Name: fibre-channel
  Model: LP9002
  Node: fibre-channel@1
  Device Type: fcp
  Physical Location: U0.1-P2-I4/Q1

5.4.5 Installing the 2145 host attachment support package


To configure SVC volumes to an AIX host with the proper device type of 2145, you must have the 2145 host attachment support fileset installed prior to running cfgmgr. Running cfgmgr before installing the host attachment support fileset results in the LUNs being configured as Other SCSI Disk Drives, and they are not recognized by SDDPCM. To correct the device type, the hdisks have to be deleted using rmdev -dl hdiskX, and then cfgmgr has to be rerun; see the sketch after this list.

Perform the following steps to install the host attachment support package:
1. Access the following website:
http://www.ibm.com/servers/storage/support/software/sdd/downloading.html
2. Select Host Attachment for SDDPCM on AIX.
3. Download the appropriate host attachment package archive for your AIX version; the fileset contained in the package is devices.fcp.disk.ibm.mpio.rte.
4. Follow the instructions that are provided on the website and the readme files to install the script.
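If cfgmgr was run before the fileset was installed, the following minimal sketch shows the recovery sequence. The device name hdisk3 is illustrative; repeat the rmdev command for each hdisk that was configured with the wrong device type, and the lsdev output shown is only an example of the expected result:

# rmdev -dl hdisk3
hdisk3 deleted
# cfgmgr
# lsdev -Cc disk | grep 2145
hdisk3 Available 1D-08-02       MPIO FC 2145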

5.4.6 Subsystem Device Driver Path Control Module


The Subsystem Device Driver Path Control Module (SDDPCM) is a loadable path control module for supported storage devices that supplies path management functions and error recovery algorithms. When the supported storage devices are configured as Multipath I/O (MPIO) devices, SDDPCM is loaded as part of the AIX MPIO FCP (Fibre Channel Protocol) or AIX MPIO SAS (serial-attached SCSI) device driver during the configuration. The AIX MPIO device driver automatically discovers, configures, and makes available all storage device paths. SDDPCM then manages these paths to provide:
- High availability and load balancing of storage I/O
- Automatic path-failover protection
- Concurrent download of supported storage devices' licensed machine code
- Prevention of a single point of failure

The AIX MPIO device driver, along with SDDPCM, enhances the data availability and I/O load balancing of SVC volumes.

Note: For AIX hosts, use the Subsystem Device Driver Path Control Module (SDDPCM) as the multipath software rather than the legacy Subsystem Device Driver (SDD). Although SDD is still supported, a discussion of it is beyond the scope of this publication. For information regarding SDD, see the Multipath Subsystem Device Driver User's Guide, GC52-1309.

SDDPCM installation
Download the appropriate version of SDDPCM and install it using the standard AIX installation procedure. The latest SDDPCM software versions are available at the following website:
http://ibm.com/support/entry/portal/Downloads/Hardware/System_Storage/Storage_software/Other_software_products/System_Storage_Multipath_Subsystem_Device_Driver/

Check the driver readme file and make sure that your AIX system meets all prerequisites.

Example 5-5 shows the appropriate version of SDDPCM downloaded into the /tmp/sddpcm directory. From there, we extract it and initiate the inutoc command, which generates a dot.toc (.toc) file that is needed by the installp command prior to installing SDDPCM. Finally, we initiate the installp command, which installs SDDPCM onto this AIX host.
Example 5-5 Installing SDDPCM on AIX

# ls -l
total 3232
-rw-r-----   1 root     system   1648640 Jul 15 13:24 devices.sddpcm.61.rte.tar
# tar -tvf devices.sddpcm.61.rte.tar
-rw-r-----  271001   449628  1638400 Oct 31 12:16:23 2007 devices.sddpcm.61.rte
# tar -xvf devices.sddpcm.61.rte.tar
x devices.sddpcm.61.rte, 1638400 bytes, 3200 media blocks.
# inutoc .
# ls -l
total 6432
-rw-r--r--   1 root     system       531 Jul 15 13:25 .toc
-rw-r-----   1 271001   449628   1638400 Oct 31 2007  devices.sddpcm.61.rte
-rw-r-----   1 root     system   1648640 Jul 15 13:24 devices.sddpcm.61.rte.tar
# installp -ac -d . all

Example 5-6 shows the lslpp command that can be used to check the version of SDDPCM currently installed.
Example 5-6 Checking SDDPCM device driver

# lslpp -l | grep sddpcm
  devices.sddpcm.61.rte      2.2.0.0  COMMITTED  IBM SDD PCM for AIX V61
  devices.sddpcm.61.rte      2.2.0.0  COMMITTED  IBM SDD PCM for AIX V61

Enabling the SDDPCM web interface is described in 5.12, Using SDDDSM, SDDPCM, and SDD web interface on page 223.

5.4.7 Configuring assigned volumes using SDDPCM


We use an AIX host with the host name Atlantic to demonstrate attaching SVC volumes to an AIX host. Example 5-7 shows the host configuration prior to configuring the SVC volumes. The lspv output shows the existing hdisks, and the lsvg output shows the existing Volume Group.

Example 5-7 Status of AIX host system Atlantic

# lspv
hdisk0          0009cdcaeb48d3a3        rootvg          active
hdisk1          0009cdcac26dbb7c        rootvg          active
hdisk2          0009cdcab5657239        rootvg          active
# lsvg
rootvg

Identify WWPNs of host adapter ports


Example 5-8 shows how the lscfg command can be used to list the WWPNs for all installed adapters. The WWPNs will be used later for mapping the SVC volumes.
Example 5-8 HBA information for host Atlantic

# lscfg -vl fcs* |egrep "fcs|Network"
fcs1             U0.1-P2-I4/Q1  FC Adapter
        Network Address.............10000000C932A865
        Physical Location: U0.1-P2-I4/Q1
fcs2             U0.1-P2-I5/Q1  FC Adapter
        Network Address.............10000000C94C8C1C

Display SVC configuration


The SVC CLI can be used to display the host configuration on the SVC and to validate physical access from the host to the SVC. Example 5-9 shows the use of the lshost and lshostvdiskmap commands to obtain the following information:
1. Confirmation that a host definition has been properly defined for host Atlantic.
2. The WWPNs listed in Example 5-8 are logged in, with two logins each.
3. Atlantic has three volumes assigned to it, and the volume serial numbers are listed.
Example 5-9 SVC definitions for host system Atlantic

IBM_2145:ITSO-CLS2:admin>svcinfo lshost Atlantic
id 8
name Atlantic
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 10000000C94C8C1C
node_logged_in_count 2
state active
WWPN 10000000C932A865
node_logged_in_count 2
state active
IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Atlantic
id  name      SCSI_id  vdisk_id  vdisk_name    wwpn              vdisk_UID
8   Atlantic  0        14        Atlantic0001  10000000C94C8C1C  6005076801A180E90800000000000060
8   Atlantic  1        22        Atlantic0002  10000000C94C8C1C  6005076801A180E90800000000000061
8   Atlantic  2        23        Atlantic0003  10000000C94C8C1C  6005076801A180E90800000000000062
IBM_2145:ITSO-CLS2:admin>

Discover and configure LUNs


The cfgmgr command performs the discovery of the new LUNs and configures them into AIX. The following commands probe devices on each adapter individually:
# cfgmgr -l fcs1
# cfgmgr -l fcs2
The following command probes devices sequentially across all installed adapters:
# cfgmgr -vS
The lsdev command lists the three newly configured hdisks, represented as MPIO FC 2145 devices, as shown in Example 5-10.
Example 5-10 Volumes from SVC

# lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0   16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0   16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0  16 Bit LVD SCSI Disk Drive
hdisk3 Available 1D-08-02       MPIO FC 2145
hdisk4 Available 1D-08-02       MPIO FC 2145
hdisk5 Available 1D-08-02       MPIO FC 2145

The mkvg command can now be used to create a Volume Group with the three newly configured hdisks, as shown in Example 5-11.
Example 5-11 Running the mkvg command

# mkvg -y itsoaixvg hdisk3
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg
# mkvg -y itsoaixvg1 hdisk4
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg1
# mkvg -y itsoaixvg2 hdisk5
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg2

The lspv output now shows the new Volume Group label on each of the hdisks that were included in the Volume Groups, as seen in Example 5-12.
Example 5-12 Showing the vpath assignment into the Volume Group

# lspv
hdisk0          0009cdcaeb48d3a3        rootvg          active
hdisk1          0009cdcac26dbb7c        rootvg          active
hdisk2          0009cdcab5657239        rootvg          active
hdisk3          0009cdca28b589f5        itsoaixvg       active
hdisk4          0009cdca28b87866        itsoaixvg1      active
hdisk5          0009cdca28b8ad5b        itsoaixvg2      active


5.4.8 Using SDDPCM


SDDPCM is administered using the pcmpath command. This command is used to perform all administrative functions, such as displaying and changing the path state. The pcmpath query adapter command displays the current state of the adapters. In Example 5-13, we can see that both adapters are in an optimal state, with State=NORMAL and Mode=ACTIVE.
Example 5-13 SDDPCM commands that are used to check the availability of the adapters

# pcmpath query adapter

Active Adapters :2

Adpt#   Name    State     Mode    Select  Errors  Paths  Active
    0   fscsi1  NORMAL  ACTIVE       407       0      6       6
    1   fscsi2  NORMAL  ACTIVE       425       0      6       6

The pcmpath query device command displays the current state of the devices. In Example 5-14, we can see the path State and Mode for each of the defined hdisks; all paths are available, with State=OPEN and Mode=NORMAL. Additionally, an asterisk (*) displayed next to a path indicates an inactive path, that is, a path that is configured to the non-preferred SVC node in the I/O Group.
Example 5-14 SDDPCM commands that are used to check the availability of the devices

# pcmpath query device

Total Devices : 3

DEV#:   3  DEVICE NAME: hdisk3  TYPE: 2145  ALGORITHM: Load Balance
SERIAL: 6005076801A180E90800000000000060
==========================================================================
Path#      Adapter/Path Name    State     Mode     Select  Errors
    0           fscsi1/path0     OPEN   NORMAL        152       0
    1*          fscsi1/path1     OPEN   NORMAL         48       0
    2*          fscsi2/path2     OPEN   NORMAL         48       0
    3           fscsi2/path3     OPEN   NORMAL        160       0

DEV#:   4  DEVICE NAME: hdisk4  TYPE: 2145  ALGORITHM: Load Balance
SERIAL: 6005076801A180E90800000000000061
==========================================================================
Path#      Adapter/Path Name    State     Mode     Select  Errors
    0*          fscsi1/path0     OPEN   NORMAL         37       0
    1           fscsi1/path1     OPEN   NORMAL         66       0
    2           fscsi2/path2     OPEN   NORMAL         71       0
    3*          fscsi2/path3     OPEN   NORMAL         38       0

DEV#:   5  DEVICE NAME: hdisk5  TYPE: 2145  ALGORITHM: Load Balance
SERIAL: 6005076801A180E90800000000000062
==========================================================================
Path#      Adapter/Path Name    State     Mode     Select  Errors
    0           fscsi1/path0     OPEN   NORMAL         66       0
    1*          fscsi1/path1     OPEN   NORMAL         38       0
    2*          fscsi2/path2     OPEN   NORMAL         38       0
    3           fscsi2/path3     OPEN   NORMAL         70       0
#

5.4.9 Creating and preparing volumes for use with AIX V6.1 and SDDPCM
The itsoaixvg Volume Group is created using hdisk3. A logical volume is created when the crfs command creates a JFS2 file system in this Volume Group; the file system is mounted on the /itsoaixvg mount point, as shown in Example 5-15.
Example 5-15 Host system new Volume Group and file system configuration

# lsvg -o
itsoaixvg2
itsoaixvg1
itsoaixvg
rootvg
# crfs -v jfs2 -g itsoaixvg -a size=3G -m /itsoaixvg -p rw -a agblksize=4096
File system created successfully.
3145428 kilobytes total disk space.
New File System size is 6291456
# lsvg -l itsoaixvg
itsoaixvg:
LV NAME   TYPE     LPs  PPs  PVs  LV STATE      MOUNT POINT
loglv00   jfs2log  1    1    1    closed/syncd  N/A
fslv00    jfs2     384  384  1    closed/syncd  /itsoaixvg
#

5.4.10 Expanding an AIX volume


AIX supports dynamic volume expansion starting at AIX 5L Version 5.2. This capability allows a volume's capacity to be increased by the storage subsystem while the volumes are actively in use by the host and applications. The following restrictions apply:
- The volume cannot belong to a concurrent-capable Volume Group.
- The volume cannot belong to a FlashCopy, Metro Mirror, or Global Mirror relationship.

The following steps outline how to expand a volume on an AIX host, where the volume is on the SVC:
1. Display the current size of the SVC volume using the SVC CLI command svcinfo lsvdisk <VDisk_name>. The capacity of the volume as seen by the host is displayed in the capacity field of the lsvdisk output, in GB.
2. Identify the corresponding AIX hdisk by matching the vdisk_UID from the lsvdisk output with the SERIAL field of the pcmpath query device output.
3. Display the capacity currently configured in AIX using the lspv hdisk command. The capacity is shown in the TOTAL PPs field, in MB.
4. To expand the capacity of the SVC volume, use the svctask expandvdisksize command.


5. After the capacity of the volume has been expanded, AIX needs to update its configured capacity. To initiate the capacity update on AIX, use the chvg -g vg_name command, where vg_name is the Volume Group in which the expanded volume resides. If AIX does not return any messages, the command was successful, and the volume changes in this Volume Group have been saved. If AIX cannot see any changes in the volumes, it returns an explanatory message.
6. Display the new AIX configured capacity using the lspv hdisk command. Again, the capacity is shown in the TOTAL PPs field, in MB.
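Putting these steps together, a minimal sketch of the whole sequence follows. The volume name itsovol01, the Volume Group name itsoaixvg, and the device hdisk3 are illustrative values that must be replaced with your own. The lsvdisk output is used to note the capacity and vdisk_UID, and the pcmpath query device output is used to match that vdisk_UID against the SERIAL field, as described in steps 1 and 2:

IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk itsovol01
IBM_2145:ITSO-CLS2:admin>svctask expandvdisksize -size 1 -unit gb itsovol01

# pcmpath query device
# chvg -g itsoaixvg
# lspv hdisk3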

5.4.11 Running SVC commands from an AIX host system


To issue CLI commands, you must install and prepare the SSH client system on the AIX host system. For AIX 5L V5.1 and later, you can get OpenSSH from the Bonus Packs. You also need its prerequisite, OpenSSL, from the AIX toolbox for Linux applications for Power Systems. For AIX V4.3.3, the software is available from the AIX toolbox for Linux applications:
http://ibm.com/systems/power/software/aix/linux/toolbox/download.html
The AIX installation images from IBM developerWorks are available at this website:
http://sourceforge.net/projects/openssh-aix

Perform the following steps:
1. To generate the key files on AIX, issue the following command:
ssh-keygen -t rsa -f filename
The -t parameter specifies the type of key to generate: rsa1, rsa2, or dsa. The value for an rsa2 key is simply rsa; for an rsa1 key, the type must be rsa1. When creating the key for the SVC, use type rsa2. The -f parameter specifies the file names of the private and public keys on the AIX server (the public key gets the extension .pub after the file name).
2. Next, install the public key on the SVC by using the Master Console. Copy the public key to the Master Console, and install the key on the SVC, as described in Chapter 4, SAN Volume Controller initial configuration on page 105.
3. On the AIX server, make sure that the private key and the public key are in the .ssh directory in the home directory of the user.
4. To connect to the SVC and use a CLI session from the AIX host, issue the following command:
ssh -l admin -i filename svc
5. You can also issue the commands directly on the AIX host, which is useful for scripting. To do this, append the SVC command to the previous command. For example, to list the hosts that are defined on the SVC, enter the following command:
ssh -l admin -i filename svc svcinfo lshost
In this command, -l admin is the user name used to log on to the SVC, -i filename is the file name of the private key generated, and svc is the host name or IP address of the SVC.
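As a concrete sketch of steps 1 and 5, the following commands generate a key pair and then run a single CLI command against the cluster. The key file name and the cluster address 9.43.86.117 are illustrative, and the matching public key must already be installed on the SVC:

# ssh-keygen -t rsa -f /home/admin/.ssh/svc_key
# ssh -l admin -i /home/admin/.ssh/svc_key 9.43.86.117 svcinfo lshost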

5.5 Windows-specific information


In the following sections, we detail specific information about the connection of Windows-based hosts to the SVC environment.


5.5.1 Configuring Windows Server 2003, 2008, 2008 R2 hosts


This section provides an overview of the requirements for attaching the SVC to a host running Windows Server 2003, Windows Server 2008, or Windows Server 2008 R2. To make the Windows server capable of handling volumes presented by the SVC, a multipath driver has to be installed: the IBM Subsystem Device Driver Device Specific Module (SDDDSM). Before you attach the SVC to your host, make sure that all of the following requirements are fulfilled:
- Check all prerequisites provided in section 2.0 of the SDDDSM readme file.
- Check the LUN limitations for your host system. Ensure that there are enough FC adapters installed in the server to handle the total number of LUNs that you want to attach.

5.5.2 Configuring Windows


To configure the Windows hosts, follow these steps:
1. Make sure that the latest OS service pack and hotfixes are applied to your Microsoft Windows server system.
2. Use the latest supported firmware and driver levels on your host system.
3. Install the HBA or HBAs on the Windows server, as shown in 5.5.4, Host adapter installation and configuration on page 173.
4. Connect the Windows Server FC host adapters to the switches.
5. Configure the switches (zoning).
6. Install the FC host adapter driver, as described in 5.5.3, Hardware lists, device driver, HBAs, and firmware levels on page 172.
7. Configure the HBA for hosts running Windows, as described in 5.5.4, Host adapter installation and configuration on page 173.
8. Check the HBA driver readme file for the required Windows registry settings, as described in 5.5.3, Hardware lists, device driver, HBAs, and firmware levels on page 172.
9. Check the disk timeout on Microsoft Windows Server, as described in 5.5.5, Changing the disk timeout on Microsoft Windows Server on page 173.
10.Install and configure SDDDSM.
11.Restart the Windows Server host system.
12.Configure the host, volumes, and host mapping in the SVC.
13.Use Rescan disk in Computer Management of the Windows server to discover the volumes that were created on the SAN Volume Controller.

5.5.3 Hardware lists, device driver, HBAs, and firmware levels


The latest information about supported hardware, device drivers, and firmware is available at this website:
http://ibm.com/systems/storage/software/virtualization/svc/interop.html
On this page, browse to the V6.3.x section, select the Supported Hardware, Device Driver, Firmware and Recommended Software Levels link, and then search for Windows. At this website, you will also find the hardware list for supported HBAs and the driver levels for Windows. Check the supported firmware and driver level for your HBA, and follow the manufacturer's instructions to upgrade the firmware and driver levels for each type of HBA. In most manufacturers' driver readme files, you will find instructions for the Windows registry parameters that have to be set for the HBA driver.

5.5.4 Host adapter installation and configuration


Install the host adapters in your system. Refer to the manufacturer's instructions for installation and configuration of the HBAs. Also check the documentation provided for the server system for guidelines on installing FC HBAs in certain PCI(e) slots, and the like. Detailed configuration settings for the different vendors' FC HBAs are available in the SVC Information Center in the section Installing > Host attachment > Fibre Channel host attachments > Hosts running the Microsoft Windows Server operating system.

5.5.5 Changing the disk timeout on Microsoft Windows Server


This section describes how to change the disk I/O timeout value on Windows Server 2003, Windows Server 2008, and Windows Server 2008 R2 systems. On your Windows server hosts, change the disk I/O timeout value to 60 in the Windows registry:
1. In Windows, click Start, and select Run.
2. In the dialog text box, type regedit and press Enter.
3. In the registry browsing tool, locate the HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Disk\TimeOutValue key.
4. Confirm that the value for the key is 60 (decimal value), and, if necessary, change the value to 60, as shown in Figure 5-8.

Figure 5-8 Regedit
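The same registry change can also be made from a command prompt, which is convenient when preparing several hosts. The following is a minimal sketch; query the current value first, then set it:

C:\>reg query HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue
C:\>reg add HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue /t REG_DWORD /d 60 /f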

5.5.6 Installing the SDDDSM multipath-driver on Windows


The following section shows how to install the SDDDSM driver on a Windows Server 2008 R2 host.

Windows Server 2003, Windows Server 2008 (R2) and MPIO


Microsoft Multipath I/O (MPIO) is a generic multipath driver provided by Microsoft, which, by itself, does not form a complete solution. It works in conjunction with device-specific modules (DSMs), usually provided by the vendor of the storage subsystem. This design allows the parallel operation of multiple vendors' storage systems on the same host without interference, because each MPIO instance interacts only with the storage system for which its DSM is provided. MPIO is not installed with the Windows operating system by default. Instead, storage vendors must package the MPIO drivers with their own DSM.

IBM Subsystem Device Driver DSM (SDDDSM) is the IBM multipath I/O solution that is based on Microsoft MPIO technology. It is a device-specific module specifically designed to support IBM storage devices on Windows Server 2003 and Windows Server 2008 (R2) servers. The intention of MPIO is to achieve better integration of multipath storage with the operating system. It also allows the use of multipathing in the SAN infrastructure during the boot process for SAN boot hosts.

Subsystem Device Driver Device Specific Module for SVC


The Subsystem Device Driver Device Specific Module (SDDDSM) installation package supports the SVC device on the Windows Server 2003 and Windows Server 2008 (R2) operating systems. Together with MPIO, it is designed to support the multipath configuration environments in the IBM System Storage SAN Volume Controller. It resides in a host system, along with the native disk device driver, and provides the following functions:
- Enhanced data availability
- Dynamic I/O load balancing across multiple paths
- Automatic path failover protection
- Concurrent firmware upgrade for the storage system
- Path-selection policies for the host system

Note that there is no SDDDSM support for Windows Server 2000, because SDDDSM requires the STORPORT version of the HBA device drivers.

Table 5-1 lists the SDDDSM driver levels that are supported at the time of writing.
Table 5-1 Currently supported SDDDSM driver levels

Windows operating system                                          SDD level
Windows Server 2003 SP2 (32-bit)/Windows Server 2003 SP2 (x64)    2.4.3.1-2
Windows Server 2008 (32-bit)/Windows Server 2008 (x64)            2.4.3.1-2
Windows Server 2008 R2 (x64)                                      2.4.3.1-2

To check which levels are available, go to this website:
http://ibm.com/support/docview.wss?uid=ssg1S7001350#WindowsSDDDSM
To download SDDDSM, go to this website:
http://ibm.com/support/docview.wss?uid=ssg1S4000350#SVC
After you have downloaded the appropriate archive (zip file) from the URL above, extract it to your local hard drive, and launch setup.exe to install SDDDSM. A command prompt window opens, as shown in Figure 5-9. Confirm the installation by entering Y.

174

IBM System Storage SAN Volume Controller V6.3

Draft Document for Review January 17, 2012 6:10 am

7933 05 Host Configuration Christian.fm

Figure 5-9 SDDDSM installation

After the setup has completed, enter Y again to confirm the reboot request, as shown in Figure 5-10.

Figure 5-10 Reboot system after installation

After the reboot, the SDDDSM installation is complete. You can verify the installation completion in Device Manager, because the SDDDSM device will appear (Figure 5-11 on page 175), and the SDDDSM tools will have been installed (Figure 5-12 on page 176).

Figure 5-11 SDDDSM installation

The SDDDSM tools have been installed (Figure 5-12).


Figure 5-12 SDDDSM installation

5.5.7 Attaching SVC volumes to Windows Server 2008 R2


Create the volumes on the SVC and map them to the Windows Server 2008 R2 host. In this example, we have mapped three SVC disks to the Windows Server 2008 R2 host named Diomede; see Example 5-16.
Example 5-16 SVC host mapping to host Diomede

IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Diomede
id  name     SCSI_id  vdisk_id  vdisk_name    wwpn              vdisk_UID
0   Diomede  0        20        Diomede_0001  210000E08B0541BC  6005076801A180E9080000000000002B
0   Diomede  1        21        Diomede_0002  210000E08B0541BC  6005076801A180E9080000000000002C
0   Diomede  2        22        Diomede_0003  210000E08B0541BC  6005076801A180E9080000000000002D

Perform the following steps to use the devices on your Windows Server 2008 R2 host:
1. Click Start, and click Run.
2. Enter the diskmgmt.msc command, and click OK. The Disk Management window opens.
3. Select Action, and click Rescan Disks (Figure 5-13).


Figure 5-13 Windows Server 2008 R2: Rescan disks

4. The SVC disks will now appear in the Disk Management window (Figure 5-14 on page 177).

Figure 5-14 Windows Server 2008 R2 Disk Management window

After you have assigned the SVC disks, they are also available in Device Manager. The three assigned drives are represented by SDDDSM/MPIO as IBM-2145 Multipath disk devices in the Device Manager (Figure 5-15).


Figure 5-15 Windows Server 2008 R2 Device Manager

5. To check that the disks are available, select Start → All Programs → Subsystem Device Driver DSM, and click Subsystem Device Driver DSM (Figure 5-16). The SDDDSM command-line utility appears.

Figure 5-16 Windows Server 2008 R2 Subsystem Device Driver DSM utility

6. Enter the datapath query device command and press Enter (Example 5-17). This command will display all of the disks and the available paths, including their states.
Example 5-17 Windows Server 2008 R2 SDDDSM command-line utility

Microsoft Windows [Version 6.0.6001]
Copyright (c) 2006 Microsoft Corporation. All rights reserved.

C:\Program Files\IBM\SDDDSM>datapath query device

Total Devices : 3

DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002B
============================================================================
Path#              Adapter/Hard Disk   State   Mode    Select  Errors
    0  Scsi Port2 Bus0/Disk1 Part0     OPEN    NORMAL       0       0
    1  Scsi Port2 Bus0/Disk1 Part0     OPEN    NORMAL    1429       0
    2  Scsi Port3 Bus0/Disk1 Part0     OPEN    NORMAL    1456       0
    3  Scsi Port3 Bus0/Disk1 Part0     OPEN    NORMAL       0       0

DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002C
============================================================================
Path#              Adapter/Hard Disk   State   Mode    Select  Errors
    0  Scsi Port2 Bus0/Disk2 Part0     OPEN    NORMAL    1520       0
    1  Scsi Port2 Bus0/Disk2 Part0     OPEN    NORMAL       0       0
    2  Scsi Port3 Bus0/Disk2 Part0     OPEN    NORMAL       0       0
    3  Scsi Port3 Bus0/Disk2 Part0     OPEN    NORMAL    1517       0

DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002D
============================================================================
Path#              Adapter/Hard Disk   State   Mode    Select  Errors
    0  Scsi Port2 Bus0/Disk3 Part0     OPEN    NORMAL      27       0
    1  Scsi Port2 Bus0/Disk3 Part0     OPEN    NORMAL    1396       0
    2  Scsi Port3 Bus0/Disk3 Part0     OPEN    NORMAL    1459       0
    3  Scsi Port3 Bus0/Disk3 Part0     OPEN    NORMAL       0       0

C:\Program Files\IBM\SDDDSM>

SAN zoning: When following the SAN zoning guidance, with one volume and a host with two HBAs, we get this result: (number of volumes) x (number of paths per I/O Group per HBA) x (number of HBAs) = 1 x 2 x 2 = four paths.

7. Right-click the disk in Disk Management, and select Online to place the disk online (Figure 5-17).


Figure 5-17 Windows Server 2008 R2: Place disk online

8. Repeat step 7 for all of your attached SVC disks.
9. Right-click one disk again, and select Initialize Disk (Figure 5-18).

Figure 5-18 Windows Server 2008 R2: Initialize Disk

10.Mark all of the disks that you want to initialize, and click OK (Figure 5-19).

Figure 5-19 Windows Server 2008 R2: Initialize Disk

11.Right-click the unallocated disk space, and select New Simple Volume (Figure 5-20).


Figure 5-20 Windows Server 2008 R2: New Simple Volume

12.The New Simple Volume Wizard window opens. Click Next.
13.Enter a disk size, and click Next (Figure 5-21).

Figure 5-21 Windows Server 2008 R2: New Simple Volume

14.Assign a drive letter, and click Next (Figure 5-22).

Figure 5-22 Windows Server 2008 R2: New Simple Volume


15.Enter a volume label, and click Next (Figure 5-23).

Figure 5-23 Windows Server 2008 R2: New Simple Volume

16.Click Finish, and repeat this step for every SVC disk on your host system (Figure 5-24).

Figure 5-24 Windows Server 2008 R2: Disk Management

5.5.8 Extending a Windows Server 2008 (R2) volume


Using SVC and Windows Server 2008 (R2) gives you the ability to extend volumes while they are in use. You can expand a volume in the SVC cluster, even if it is mapped to a host. Certain operating systems, such as Windows Server since version 2000, can handle the volumes being expanded even if the host has applications running.


A volume that is defined in a FlashCopy, Metro Mirror, or Global Mirror mapping on the SVC cannot be expanded unless the mapping is removed, which means that the FlashCopy, Metro Mirror, or Global Mirror relationship on that volume has to be stopped before it is possible to expand the volume.

Important: If you want to expand a logical drive in an extended partition in Windows Server 2003, apply the Hotfix from KB841650, which is available from the Microsoft Knowledge Base at this website:
http://support.microsoft.com/kb/841650/
Use the updated DiskPart version for Windows Server 2003, which is available from the Microsoft Knowledge Base at this website:
http://support.microsoft.com/kb/923076/

If the volume is part of a Microsoft Cluster (MSCS), Microsoft recommends shutting down all but one of the MSCS cluster nodes. Applications in the resource accessing the volume to be expanded should also be stopped before expanding the volume. Applications running in other resources can continue to run. After expanding the volume, start the application and the resource, and then restart the other nodes in the MSCS.

To expand a volume in use on a Windows Server host, the Windows DiskPart utility is used. To start DiskPart, select Start → Run, and enter DiskPart. DiskPart was developed by Microsoft to ease the administration of storage on Windows hosts. It is a command-line interface that you can use to manage disks, partitions, and volumes by using scripts or direct input on the command line. You can list disks and volumes, select them, and, after selecting them, get more detailed information, create partitions, extend volumes, and more. For more information about DiskPart, see the Microsoft website:
http://www.microsoft.com
Further information about expanding partitions of a cluster-shared disk is available at the following website:
http://support.microsoft.com/kb/304736

The following discussion shows an example of how to expand a volume on a Windows Server 2003 host, where the volume is provided by the SVC. To list a volume's size, use the svcinfo lsvdisk <VDisk_name> command. For volume Senegal_bas0001, before expansion, this command shows that the capacity is 10 GB, and it also shows the vdisk_UID. To find which disk this volume is on the Windows Server 2003 host, we use the SDD datapath query device command on the Windows host: the serial 6005076801A180E9080000000000000F of Disk1 on the Windows host matches the vdisk_UID of Senegal_bas0001. To see the size of the volume on the Windows host, we use Disk Management, as shown in Figure 5-25.


Figure 5-25 Windows Server 2003: Disk Management

This window shows that the volume size is 10 GB. To expand the volume on the SVC, we use the svctask expandvdisksize command to increase the capacity on the volume. In this example, we expand the volume by 1 GB (Example 5-18).
Example 5-18 svctask expandvdisksize command

IBM_2145:ITSO-CLS2:admin>svctask expandvdisksize -size 1 -unit gb Senegal_bas0001
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk Senegal_bas0001
id 7
name Senegal_bas0001
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_0_DS45
capacity 11.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801A180E9080000000000000F
throttling 0
preferred_node_id 3
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_0_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 11.00GB
real_capacity 11.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

To check that the volume has been expanded, we use the svcinfo lsvdisk command. In Example 5-18, we can see that the Senegal_bas0001 volume has been expanded to 11 GB in capacity. After performing a disk rescan in Windows, you will see the new unallocated space in Windows Disk Management, as shown in Figure 5-26.

Figure 5-26 Expanded volume in Disk Manager

This window shows that Disk1 now has 1 GB of new, unallocated capacity. To make this capacity available to the file system, use the following commands, as shown in Example 5-19:

diskpart         Starts DiskPart in a DOS prompt
list volume      Shows you all available volumes
select volume    Selects the volume to expand
detail volume    Displays details for the selected volume, including the unallocated capacity
extend           Extends the volume to the available unallocated space

Example 5-19 Using diskpart

C:\>diskpart
Microsoft DiskPart version 5.2.3790.3959
Copyright (C) 1999-2001 Microsoft Corporation.
On computer: SENEGAL

DISKPART> list volume

  Volume ###  Ltr  Label        Fs    Type       Size   Status   Info
  ----------  ---  -----------  ----  ---------  -----  -------  -------
  Volume 0    C                 NTFS  Partition  75 GB  Healthy  System
  Volume 1    S    SVC_Senegal  NTFS  Partition  10 GB  Healthy
  Volume 2    D                       DVD-ROM    0 B    Healthy

DISKPART> select volume 1

Volume 1 is the selected volume.

DISKPART> detail volume

  Disk ###  Status      Size   Free     Dyn  Gpt
  --------  ----------  -----  -------  ---  ---
* Disk 1    Online      11 GB  1020 MB

Readonly               : No
Hidden                 : No
No Default Drive Letter: No
Shadow Copy            : No

DISKPART> extend

DiskPart successfully extended the volume.

DISKPART> detail volume

  Disk ###  Status      Size   Free     Dyn  Gpt
  --------  ----------  -----  -------  ---  ---
* Disk 1    Online      11 GB  0 B

Readonly               : No
Hidden                 : No
No Default Drive Letter: No
Shadow Copy            : No

After extending the volume, the detail volume command shows that there is no free capacity on the volume anymore. The list volume command shows the file system size. The Disk Management window also shows the new disk size; see Figure 5-27.


Figure 5-27 Disk Management after extending disk

The example here uses a Windows Basic Disk. Dynamic disks can be expanded by expanding the underlying SVC volume. The new space appears as unallocated space at the end of the disk. In this case, you do not need to use the DiskPart tool. Instead, you can use the Windows Disk Management functions to allocate the new space. Expansion works irrespective of the volume type (simple, spanned, mirrored, and so on) on the disk. Dynamic disks can be expanded without stopping I/O in most cases.

Important: Never try to upgrade your Basic Disk to Dynamic Disk, or vice versa, without backing up your data, because this operation is disruptive for the data, due to a change in the position of the logical block address (LBA) on the disks.

5.5.9 Removing a disk on Windows


To remove a disk from Windows, when the disk is an SVC volume, we follow the standard Windows procedure to make sure that there is no data that we want to preserve on the disk, that no applications are using the disk, and that no I/O is going to the disk. After completing this procedure, we remove the host mapping on the SVC. We must make sure that we are removing the correct volume. To verify this, we use SDD to find the serial number for the disk, and, on the SVC, we use lshostvdiskmap to find the volume name and number. We also check that the SDD serial number on the host matches the UID on the SVC for the volume.

When the host mapping is removed, we perform a rescan for the disk, Disk Management on the server removes the disk, and the vpath goes into the CLOSE state on the server. We can verify these actions by using the datapath query device SDD command, but the closed vpath is only removed after a reboot of the server.

In the following sequence of examples, we show how to remove an SVC volume from a Windows server. We show it on a Windows Server 2003 operating system, but the steps also apply to Windows Server 2000 and Windows Server 2008.


Figure 5-25 on page 184 shows the Disk Manager before removing the disk. We will remove Disk 1. To find the correct volume information, we find the Serial/UID number using SDD (Example 5-20).
Example 5-20 Removing SVC disk from the Windows server

C:\Program Files\IBM\SDDDSM>datapath query device

Total Devices : 3

DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000000F
============================================================================
Path#              Adapter/Hard Disk   State   Mode    Select  Errors
    0  Scsi Port2 Bus0/Disk1 Part0     OPEN    NORMAL    1471       0
    1  Scsi Port2 Bus0/Disk1 Part0     OPEN    NORMAL       0       0
    2  Scsi Port3 Bus0/Disk1 Part0     OPEN    NORMAL       0       0
    3  Scsi Port3 Bus0/Disk1 Part0     OPEN    NORMAL    1324       0

DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000010
============================================================================
Path#              Adapter/Hard Disk   State   Mode    Select  Errors
    0  Scsi Port2 Bus0/Disk2 Part0     OPEN    NORMAL      20       0
    1  Scsi Port2 Bus0/Disk2 Part0     OPEN    NORMAL      94       0
    2  Scsi Port3 Bus0/Disk2 Part0     OPEN    NORMAL      55       0
    3  Scsi Port3 Bus0/Disk2 Part0     OPEN    NORMAL       0       0

DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000011
============================================================================
Path#              Adapter/Hard Disk   State   Mode    Select  Errors
    0  Scsi Port2 Bus0/Disk3 Part0     OPEN    NORMAL     100       0
    1  Scsi Port2 Bus0/Disk3 Part0     OPEN    NORMAL       0       0
    2  Scsi Port3 Bus0/Disk3 Part0     OPEN    NORMAL       0       0
    3  Scsi Port3 Bus0/Disk3 Part0     OPEN    NORMAL      69       0

Knowing the Serial/UID of the volume and the host name Senegal, we find the host mapping to remove by using the lshostvdiskmap command on the SVC, and then we remove the actual host mapping (Example 5-21).
Example 5-21 Finding and removing the host mapping

IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Senegal
id  name     SCSI_id  vdisk_id  vdisk_name       wwpn              vdisk_UID
1   Senegal  0        7         Senegal_bas0001  210000E08B89B9C0  6005076801A180E9080000000000000F
1   Senegal  1        8         Senegal_bas0002  210000E08B89B9C0  6005076801A180E90800000000000010
1   Senegal  2        9         Senegal_bas0003  210000E08B89B9C0  6005076801A180E90800000000000011
IBM_2145:ITSO-CLS2:admin>svctask rmvdiskhostmap -host Senegal Senegal_bas0001
IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Senegal
id  name     SCSI_id  vdisk_id  vdisk_name       wwpn              vdisk_UID
1   Senegal  1        8         Senegal_bas0002  210000E08B89B9C0  6005076801A180E90800000000000010
1   Senegal  2        9         Senegal_bas0003  210000E08B89B9C0  6005076801A180E90800000000000011

Here, we can see that the volume mapping has been removed. On the server, we then perform a disk rescan in Disk Management, and we now see that the correct disk (Disk1) has been removed, as shown in Figure 5-28.

Figure 5-28 Disk Management: Disk has been removed

SDDDSM also shows us that the status for all paths to Disk1 has changed to CLOSE, because the disk is not available (Example 5-22 on page 190).


Example 5-22 SDD: Closed path

C:\Program Files\IBM\SDDDSM>datapath query device

Total Devices : 3

DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000000F
============================================================================
Path#              Adapter/Hard Disk   State   Mode    Select  Errors
    0  Scsi Port2 Bus0/Disk1 Part0     CLOSE   NORMAL    1471       0
    1  Scsi Port2 Bus0/Disk1 Part0     CLOSE   NORMAL       0       0
    2  Scsi Port3 Bus0/Disk1 Part0     CLOSE   NORMAL       0       0
    3  Scsi Port3 Bus0/Disk1 Part0     CLOSE   NORMAL    1324       0

DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000010
============================================================================
Path#              Adapter/Hard Disk   State   Mode    Select  Errors
    0  Scsi Port2 Bus0/Disk2 Part0     OPEN    NORMAL      20       0
    1  Scsi Port2 Bus0/Disk2 Part0     OPEN    NORMAL     124       0
    2  Scsi Port3 Bus0/Disk2 Part0     OPEN    NORMAL      72       0
    3  Scsi Port3 Bus0/Disk2 Part0     OPEN    NORMAL       0       0

DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000011
============================================================================
Path#              Adapter/Hard Disk   State   Mode    Select  Errors
    0  Scsi Port2 Bus0/Disk3 Part0     OPEN    NORMAL     134       0
    1  Scsi Port2 Bus0/Disk3 Part0     OPEN    NORMAL       0       0
    2  Scsi Port3 Bus0/Disk3 Part0     OPEN    NORMAL       0       0
    3  Scsi Port3 Bus0/Disk3 Part0     OPEN    NORMAL      82       0

The disk (Disk1) is now removed from the server. However, to remove the SDDDSM information about the disk, the server has to be rebooted at a convenient time.

5.6 Using the SVC CLI from a Windows host


To issue CLI commands, we must install and prepare an SSH client on the Windows host system. We can install the PuTTY SSH client software on a Windows host by using the PuTTY installation program. PuTTY can be downloaded from the following website:
http://www.chiark.greenend.org.uk/~sgtatham/putty/
The following website offers SSH client alternatives for Windows:
http://www.openssh.com/windows.html
Cygwin software has an option to install an OpenSSH client. Cygwin can be downloaded from the following website:
http://www.cygwin.com/
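PuTTY also includes the plink command-line tool, which can run a single SVC command non-interactively, for example, from a script. The following is a minimal sketch; the key file name and the cluster address are illustrative, and the matching public key must already be installed on the SVC:

C:\>plink -i C:\keys\svc_private.ppk admin@9.43.86.117 svcinfo lshost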


For more information about the CLI, see Chapter 9, SAN Volume Controller operations using the command-line interface on page 467.

5.7 Microsoft Volume Shadow Copy


The SVC provides support for the Microsoft Volume Shadow Copy Service (VSS). The Microsoft Volume Shadow Copy Service can provide a point-in-time (shadow) copy of a Windows host volume while the volume is mounted and the files are in use. In this section, we discuss how to install the Microsoft Volume Shadow Copy Service support.

The following operating system versions are supported:
- Windows Server 2003 with SP2 (x86 & x86_64)
- Windows Server 2008 with SP2 (x86 & x86_64)
- Windows Server 2008 R2 with SP1

The following components are used to provide support for the service:
- SAN Volume Controller
- The IBM System Storage hardware provider, known as the IBM System Storage Support for Microsoft Volume Shadow Copy Service (IBMVSS)
- Microsoft Volume Shadow Copy Service

IBMVSS is installed on the Windows host. To provide the point-in-time shadow copy, the components complete the following process:
1. A backup application on the Windows host initiates a snapshot backup.
2. The Volume Shadow Copy Service notifies IBMVSS that a copy is needed.
3. The SAN Volume Controller prepares the volume for a snapshot.
4. The Volume Shadow Copy Service quiesces the software applications that are writing data on the host and flushes file system buffers to prepare for a copy.
5. The SAN Volume Controller creates the shadow copy using the FlashCopy Service.
6. The Volume Shadow Copy Service notifies the writing applications that I/O operations can resume and notifies the backup application that the backup was successful.

The Volume Shadow Copy Service maintains a free pool of volumes for use as a FlashCopy target and a reserved pool of volumes. These pools are implemented as virtual host systems on the SAN Volume Controller.

5.7.1 Installation overview


The steps for implementing IBMVSS must be completed in the correct sequence. Before you begin, you must have experience with, or knowledge of, administering a Windows operating system and a SAN Volume Controller. You will need to complete the following tasks:
- Verify that the system requirements are met.
- Install IBMVSS.
- Verify the installation.

- Create a free pool of volumes and a reserved pool of volumes on the SAN Volume Controller.

5.7.2 System requirements for the IBM System Storage hardware provider
Ensure that your system satisfies the following requirements before you install IBMVSS and the Virtual Disk Service software on the Windows operating system:
- A SAN Volume Controller with FlashCopy enabled.
- The IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service (VDS) software.

5.7.3 Installing the IBM System Storage hardware provider


This section includes the steps to install the IBM System Storage hardware provider on a Windows server. You must satisfy all of the system requirements before starting the installation.

During the installation, you will be prompted to enter information about the SAN Volume Controller Master Console, including the location of the truststore file. The truststore file is generated during the installation of the Master Console. You must copy this file to a location that is accessible to the IBM System Storage hardware provider on the Windows server.

When the installation is complete, the installation program might prompt you to restart the system. Complete the following steps to install the IBM System Storage hardware provider on the Windows server:
1. Download the installation archive from the IBM URL below, and extract it to a directory on the Windows server on which you want to install IBMVSS:
http://ibm.com/support/docview.wss?uid=ssg1S4000833
2. Log on to the Windows server as an administrator, and navigate to the directory where the installation files are located.
3. Run the installation program by double-clicking IBMVSSVDS.exe.
4. The Welcome window opens, as shown in Figure 5-29 on page 192. Click Next to continue with the installation.

Figure 5-29 IBM VSS/VSD installation - Welcome


5. Accept the license agreement on the next screen; the Choose Destination Location window then opens (Figure 5-30). Click Next to accept the default directory where the setup program will install the files, or click Change to select another directory, and then click Next.

Figure 5-30 IBM VSS/VSD installation - Choose Destination

6. Click Install to begin the installation (Figure 5-31).

Figure 5-31 IBM VSS/VSD installation - Install

7. The next window asks you to select a CIM server, that is, the SVC. Unlike with older SVC versions, the config node provides the CIM service on the cluster IP address. Select the correct one of the automatically discovered CIM servers, or select Enter the CIM Server address manually, and click Next (Figure 5-32 on page 194).


Figure 5-32 IBM VSS/VSD installation - Select CIM Server

8. The Enter CIM Server Details window opens. Enter the following information in the fields (Figure 5-33):
a. The CIM Server Address field is populated with the URL according to the CIM server address chosen in the previous step.
b. In the CIM User field, type the user name that the IBMVSS software will use to gain access to the SVC.
c. In the CIM Password field, type the password for the SVC user name provided in the previous step, and click Next.

Figure 5-33 IBM VSS/VSD installation - CIM Server Details

9. In the next window, click Finish. If necessary, the InstallShield Wizard prompts you to restart the system (Figure 5-34 on page 195).


Figure 5-34 IBM VSS/VSD installation complete

Additional information: If these settings change after installation, you can use the ibmvcfg.exe tool to update the Microsoft Volume Shadow Copy and Virtual Disk Services software with the new settings. If you do not have the CIM Agent server, port, or user information, contact your CIM Agent administrator.

5.7.4 Verifying the installation


Perform the following steps to verify the installation:
1. Select Start → All Programs → Administrative Tools → Services from the Windows server start menu.
2. Ensure that the service named IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software appears, that its Status is set to Started, and that its Startup Type is set to Automatic.
3. Open a command prompt window, and issue the following command:

   vssadmin list providers

   This command verifies that the IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software is listed as a provider; see Example 5-23.
Example 5-23 Microsoft Software Shadow copy provider

C:\Users\Administrator>vssadmin list providers
vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
(C) Copyright 2001-2005 Microsoft Corp.

Provider name: 'Microsoft Software Shadow Copy provider 1.0'
   Provider type: System
   Provider Id: {b5946137-7b9f-4925-af80-51abd60b20d5}
   Version: 1.0.0.7


Provider name: 'IBM System Storage Volume Shadow Copy Service Hardware Provider'
   Provider type: Hardware
   Provider Id: {d90dd826-87cf-42ce-a88d-b32caa82025b}
   Version: 4.2.1.0816

If you can successfully perform all of these verification tasks, the IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software was installed successfully on the Windows server.

5.7.5 Creating the free and reserved pools of volumes


The IBM System Storage hardware provider maintains a free pool of volumes and a reserved pool of volumes. Because these objects do not exist on the SAN Volume Controller, the free pool of volumes and the reserved pool of volumes are implemented as virtual host systems. You must define these two virtual host systems on the SAN Volume Controller.

When a shadow copy is created, the IBM System Storage hardware provider selects a volume in the free pool, assigns it to the reserved pool, and then removes it from the free pool. This process protects the volume from being overwritten by other Volume Shadow Copy Service users. To successfully perform a Volume Shadow Copy Service operation, there must be enough volumes mapped to the free pool, and the volumes must be the same size as the source volumes.

Use the SAN Volume Controller Console or the SAN Volume Controller command-line interface (CLI) to perform the following steps:
1. Create a host for the free pool of volumes. You can use the default name VSS_FREE or specify another name. Associate the host with the worldwide port name (WWPN) 5000000000000000 (15 zeroes); see Example 5-24.
Example 5-24 Creating an mkhost for the free pool

IBM_2145:ITSO-CLS2:admin>svctask mkhost -name VSS_FREE -hbawwpn 5000000000000000 -force
Host, id [2], successfully created

2. Create a virtual host for the reserved pool of volumes. You can use the default name VSS_RESERVED or specify another name. Associate the host with the WWPN 5000000000000001 (14 zeroes); see Example 5-25.
Example 5-25 Creating an mkhost for the reserved pool

IBM_2145:ITSO-CLS2:admin>svctask mkhost -name VSS_RESERVED -hbawwpn 5000000000000001 -force
Host, id [3], successfully created

3. Map the logical units (volumes) to the free pool of volumes. The volumes cannot be mapped to any other hosts. If you already have volumes created for the free pool of volumes, you must assign the volumes to the free pool.
4. Create host mappings between the volumes selected in step 3 and the VSS_FREE host to add the volumes to the free pool. Alternatively, you can use the ibmvcfg add command to add volumes to the free pool; see Example 5-26 on page 197.


Example 5-26 Host mappings

IBM_2145:ITSO-CLS2:admin>svctask mkvdiskhostmap -host VSS_FREE msvc0001
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO-CLS2:admin>svctask mkvdiskhostmap -host VSS_FREE msvc0002
Virtual Disk to Host map, id [1], successfully created

5. Verify that the volumes have been mapped. If you do not use the default WWPNs 5000000000000000 and 5000000000000001, you must configure the IBM System Storage hardware provider with the WWPNs; see Example 5-27.
Example 5-27 Verify hosts

IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap VSS_FREE
id name     SCSI_id vdisk_id vdisk_name wwpn             vdisk_UID
2  VSS_FREE 0       10       msvc0001   5000000000000000 6005076801A180E90800000000000012
2  VSS_FREE 1       11       msvc0002   5000000000000000 6005076801A180E90800000000000013
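As a brief illustration of the alternative mentioned in step 4, the following hedged sketch adds the same two volumes to the free pool with the ibmvcfg add command (described in 5.7.6, "Changing the configuration parameters"); the volume names msvc0001 and msvc0002 are the ones created for this configuration, and the installation path is the default one:

C:\Program Files\IBM\Hardware Provider for VSS-VDS>ibmvcfg add msvc0001 msvc0002

If the volumes reside on an SVC other than the configured default target, append the -s <cluster IP address> parameter, as shown in Table 5-2.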

5.7.6 Changing the configuration parameters


You can change the parameters that you defined when you installed the IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software. To do so, use the ibmvcfg.exe utility, a command-line utility that is located in the C:\Program Files\IBM\Hardware Provider for VSS-VDS directory; see Example 5-28.
Example 5-28 Using ibmvcfg.exe utility help

C:\Program Files\IBM\Hardware Provider for VSS-VDS>ibmvcfg.exe
IBM System Storage VSS Provider Configuration Tool Commands
----------------------------------------
ibmvcfg.exe <command> <command arguments>
Commands:
/h | /help | -? | /?
showcfg
listvols <all|free|unassigned>
add <volume serial number list> (separated by spaces)
rem <volume serial number list> (separated by spaces)
Configuration:
set user <CIMOM user name>
set password <CIMOM password>
set trace [0-7]
set trustpassword <trustpassword>
set truststore <truststore location>
set usingSSL <YES | NO>
set vssFreeInitiator <WWPN>
set vssReservedInitiator <WWPN>
set FlashCopyVer <1 | 2> (only applies to ESS)
set cimomPort <PORTNUM>
set cimomHost <Hostname>
set namespace <Namespace>


set targetSVC <svc_cluster_ip>
set backgroundCopy <0-100>

Table 5-2 lists the available commands.
Table 5-2 Available ibmvcfg.exe commands

ibmvcfg showcfg
  Lists the current settings.
  Example: ibmvcfg showcfg

ibmvcfg set username <username>
  Sets the user name to access the SAN Volume Controller Console.
  Example: ibmvcfg set username Dan

ibmvcfg set password <password>
  Sets the password of the user name that will access the SAN Volume Controller Console.
  Example: ibmvcfg set password mypassword

ibmvcfg set targetSVC <ipaddress>
  Specifies the IP address of the SAN Volume Controller on which the volumes are located when volumes are moved to and from the free pool with the ibmvcfg add and ibmvcfg rem commands. The IP address is overridden if you use the -s flag with the ibmvcfg add and ibmvcfg rem commands.
  Example: ibmvcfg set targetSVC 9.43.86.120

ibmvcfg set backgroundCopy <0-100>
  Sets the background copy rate for FlashCopy.
  Example: ibmvcfg set backgroundCopy 80

ibmvcfg set usingSSL <yes|no>
  Specifies whether to use the Secure Sockets Layer protocol to connect to the SAN Volume Controller Console.
  Example: ibmvcfg set usingSSL yes

ibmvcfg set cimomPort <portnum>
  Specifies the SAN Volume Controller Console port number. The default value is 5999.
  Example: ibmvcfg set cimomPort 5999

ibmvcfg set cimomHost <server name>
  Sets the name of the server where the SAN Volume Controller Console is installed.
  Example: ibmvcfg set cimomHost cimomserver

ibmvcfg set namespace <namespace>
  Specifies the namespace value that the Master Console is using. The default value is \root\ibm.
  Example: ibmvcfg set namespace \root\ibm

ibmvcfg set vssFreeInitiator <WWPN>
  Specifies the WWPN of the free pool host. The default value is 5000000000000000. Modify this value only if there is already a host in your environment with a WWPN of 5000000000000000.
  Example: ibmvcfg set vssFreeInitiator 5000000000000000

ibmvcfg set vssReservedInitiator <WWPN>
  Specifies the WWPN of the reserved pool host. The default value is 5000000000000001. Modify this value only if there is already a host in your environment with a WWPN of 5000000000000001.
  Example: ibmvcfg set vssReservedInitiator 5000000000000001

ibmvcfg listvols
  Lists all volumes, including information about the size, location, and host mappings.
  Example: ibmvcfg listvols

ibmvcfg listvols all
  Lists all volumes, including information about the size, location, and host mappings.
  Example: ibmvcfg listvols all

ibmvcfg listvols free
  Lists the volumes that are currently in the free pool.
  Example: ibmvcfg listvols free

ibmvcfg listvols unassigned
  Lists the volumes that are currently not mapped to any hosts.
  Example: ibmvcfg listvols unassigned

ibmvcfg add <volume list> [-s ipaddress]
  Adds one or more volumes to the free pool of volumes. Use the -s parameter to specify the IP address of the SAN Volume Controller where the volumes are located. The -s parameter overrides the default IP address that is set with the ibmvcfg set targetSVC command.
  Examples: ibmvcfg add vdisk12
            ibmvcfg add 600507680187000350000000000000BA -s 66.150.210.141

ibmvcfg rem <volume list> [-s ipaddress]
  Removes one or more volumes from the free pool of volumes. Use the -s parameter to specify the IP address of the SAN Volume Controller where the volumes are located. The -s parameter overrides the default IP address that is set with the ibmvcfg set targetSVC command.
  Examples: ibmvcfg rem vdisk12
            ibmvcfg rem 600507680187000350000000000000BA -s 66.150.210.141
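To tie these commands together, the following hedged sketch shows a plausible first-time configuration session using only the commands from Table 5-2; the user name, password, and cluster IP address are placeholders for your own environment:

C:\Program Files\IBM\Hardware Provider for VSS-VDS>ibmvcfg set username vssadmin
C:\Program Files\IBM\Hardware Provider for VSS-VDS>ibmvcfg set password passw0rd
C:\Program Files\IBM\Hardware Provider for VSS-VDS>ibmvcfg set targetSVC 9.43.86.120
C:\Program Files\IBM\Hardware Provider for VSS-VDS>ibmvcfg showcfg

Running ibmvcfg showcfg last confirms that the settings were stored as intended.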

5.8 Specific Linux (on x86 / x86_64) information


The following sections describe specific information pertaining to the connection of Linux on Intel-based hosts to the SVC environment.

5.8.1 Configuring the Linux host


Follow these steps to configure the Linux host:
1. Use the latest firmware levels on your host system.
2. Install the HBA or HBAs on the Linux server, as described in 5.5.4, Host adapter installation and configuration on page 173.


3. Install the supported HBA driver/firmware, and upgrade the kernel if required.
4. Connect the Linux server FC host adapters to the switches.
5. Configure the switches (zoning), if needed.
6. Install SDD for Linux, as described in 5.8.5, Multipathing in Linux on page 201.
7. Configure the host, volumes, and host mapping in the SAN Volume Controller.
8. Rescan for LUNs on the Linux server to discover the volumes that were created on the SVC; see the sketch after this list.
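For step 8, a minimal sketch of a rescan on a 2.6 kernel host follows; the host number (host0) is hypothetical and the command must be repeated for each FC HBA in the server:

[root@Palau ~]# echo "- - -" > /sys/class/scsi_host/host0/scan
[root@Palau ~]# cat /proc/scsi/scsi

The three dashes are wildcards for channel, target, and LUN; cat /proc/scsi/scsi then lists the newly discovered 2145 devices.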

5.8.2 Configuration information


The SAN Volume Controller supports hosts that run the following Linux distributions:
- Red Hat Enterprise Linux
- SUSE Linux Enterprise Server

For the latest information, always refer to the following website:
http://www.ibm.com/storage/support/2145

This website provides the hardware list for supported HBAs and device driver levels for Linux. Check the supported firmware and driver level for your HBA, and follow the manufacturer's instructions to upgrade the firmware and driver levels for each type of HBA.

5.8.3 Disabling automatic Linux system updates


Many Linux distributions give you the ability to configure your systems for automatic system updates. Red Hat provides this ability in the form of a program called up2date. Novell SUSE provides the YaST Online Update utility. These features periodically query for updates that are available for each host and they can be configured to automatically install any new updates that they find. Often, the automatic update process also upgrades the system to the latest kernel level. Hosts running SDD must turn off the automatic update of kernel levels, because certain drivers that are supplied by IBM, such as SDD, are dependent on a specific kernel and will cease to function on a new kernel. Similarly, HBA drivers need to be compiled against specific kernels to function optimally. By allowing automatic updates of the kernel, you risk affecting your host systems unexpectedly.

5.8.4 Setting queue depth with QLogic HBAs


The queue depth is the number of I/O operations that can be run in parallel on a device. Configure your host running the Linux operating system by using the formula that is specified in 5.13, Calculating the queue depth on page 224. Perform the following steps to set the maximum queue depth:
1. Add the following line to the /etc/modules.conf file:
   - For the 2.4 kernel (SUSE Linux Enterprise Server 8 or Red Hat Enterprise Linux):
     options qla2300 ql2xfailover=0 ql2xmaxqdepth=new_queue_depth
   - For the 2.6 kernel (SUSE Linux Enterprise Server 9, or later, or Red Hat Enterprise Linux 4, or later):
     options qla2xxx ql2xfailover=0 ql2xmaxqdepth=new_queue_depth


2. Rebuild the RAM disk that is associated with the kernel being used by using one of the following commands:
   - If you are running on a SUSE Linux Enterprise Server operating system, run the mk_initrd command.
   - If you are running on a Red Hat Enterprise Linux operating system, run the mkinitrd command, and then restart.
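After the restart, you can check that the driver picked up the new value. The following is a minimal sketch for a 2.6 kernel; it assumes the qla2xxx module exposes its parameters through sysfs at this path, and the value 32 is only an example:

[root@Palau ~]# cat /sys/module/qla2xxx/parameters/ql2xmaxqdepth
32

If the file shows the queue depth that you configured, the options line in /etc/modules.conf was applied.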

5.8.5 Multipathing in Linux


Red Hat Enterprise Linux 5 and later and SUSE Linux Enterprise Server 10 and later provide their own multipath support as part of the operating system. On older systems, it is necessary to install the IBM SDD multipath driver.

Installing SDD
This section describes how to install SDD for older distributions. Before performing these steps, always check for the currently supported levels, as described in 5.8.2, Configuration information on page 200. The cat /proc/scsi/scsi command displayed in Example 5-29 shows the devices that the SCSI driver has probed. In our configuration, we have two HBAs installed in our server, and we configured the zoning to access our volume from four paths.
Example 5-29 cat /proc/scsi/scsi command example

[root@diomede sdd]# cat /proc/scsi/scsi
Attached devices:
Host: scsi4 Channel: 00 Id: 00 Lun: 00
  Vendor: IBM      Model: 2145             Rev: 0000
  Type:   Unknown                          ANSI SCSI revision: 04
Host: scsi5 Channel: 00 Id: 00 Lun: 00
  Vendor: IBM      Model: 2145             Rev: 0000
  Type:   Unknown                          ANSI SCSI revision: 04
[root@diomede sdd]#

The rpm -ivh IBMsdd-1.6.3.0-5.i686.rhel4.rpm command installs the package, as shown in Example 5-30.
Example 5-30 rpm command example

[root@Palau sdd]# rpm -ivh IBMsdd-1.6.3.0-5.i686.rhel4.rpm
Preparing...                ########################################### [100%]
   1:IBMsdd                 ########################################### [100%]
Added following line to /etc/inittab:
srv:345:respawn:/opt/IBMsdd/bin/sddsrv > /dev/null 2>&1
[root@Palau sdd]#

To manually load and configure SDD on Linux, use the service sdd start command (SUSE Linux users can use the sdd start command). If you are not running a supported kernel, you will get an error message. If your kernel is supported, you see an OK success message, as shown in Example 5-31 on page 202.


Example 5-31 Supported kernel for SDD

[root@Palau sdd]# sdd start
Starting IBMsdd driver load:                               [  OK  ]
Issuing killall sddsrv to trigger respawn...
Starting IBMsdd configuration:                             [  OK  ]

Issue the cfgvpath query command to view the name and serial number of the volume that is configured in the SAN Volume Controller, as shown in Example 5-32.
Example 5-32 cfgvpath query example

[root@Palau ~]# cfgvpath query
RTPG command = a3 0a 00 00 00 00 00 00 08 20 00 00
total datalen=52 datalen_str=0x00 00 00 30
RTPG succeeded: sd_name=/dev/sda df_ctlr=0
/dev/sda ( 8,  0) host=0 ch=0 id=0 lun=0 vid=IBM pid=2145
serial=60050768018201bee000000000000035 lun_id=60050768018201bee000000000000035 ctlr_flag=1 ctlr_nbr=1 df_ctlr=0
RTPG command = a3 0a 00 00 00 00 00 00 08 20 00 00
total datalen=52 datalen_str=0x00 00 00 30
RTPG succeeded: sd_name=/dev/sdb df_ctlr=0
/dev/sdb ( 8, 16) host=0 ch=0 id=1 lun=0 vid=IBM pid=2145
serial=60050768018201bee000000000000035 lun_id=60050768018201bee000000000000035 ctlr_flag=1 ctlr_nbr=0 df_ctlr=0
RTPG command = a3 0a 00 00 00 00 00 00 08 20 00 00
total datalen=52 datalen_str=0x00 00 00 30
RTPG succeeded: sd_name=/dev/sdc df_ctlr=0
/dev/sdc ( 8, 32) host=1 ch=0 id=0 lun=0 vid=IBM pid=2145
serial=60050768018201bee000000000000035 lun_id=60050768018201bee000000000000035 ctlr_flag=1 ctlr_nbr=0 df_ctlr=0
RTPG command = a3 0a 00 00 00 00 00 00 08 20 00 00
total datalen=52 datalen_str=0x00 00 00 30
RTPG succeeded: sd_name=/dev/sdd df_ctlr=0
/dev/sdd ( 8, 48) host=1 ch=0 id=1 lun=0 vid=IBM pid=2145
serial=60050768018201bee000000000000035 lun_id=60050768018201bee000000000000035 ctlr_flag=1 ctlr_nbr=1 df_ctlr=0
[root@Palau ~]#

The cfgvpath command configures the SDD vpath devices, as shown in Example 5-33.
Example 5-33 cfgvpath command example

[root@Palau ~]# cfgvpath
c---------  1 root root 253, 0 Jun  5 09:04 /dev/IBMsdd
WARNING: vpatha path sda has already been configured.
WARNING: vpatha path sdb has already been configured.
WARNING: vpatha path sdc has already been configured.
WARNING: vpatha path sdd has already been configured.
Writing out new configuration to file /etc/vpath.conf
[root@Palau ~]#

The configuration information is saved by default in the /etc/vpath.conf file. You can save the configuration information to a specified file name by entering the following command:

cfgvpath -f file_name.cfg

Issue the chkconfig command to enable SDD to run at system startup:

chkconfig sdd on

To verify the setting, enter the following command:

chkconfig --list sdd

This verification is shown in Example 5-34.
Example 5-34 sdd run level example

[root@Palau sdd]# chkconfig --list sdd
sdd             0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@Palau sdd]#

If necessary, you can disable the startup option by entering this command:

chkconfig sdd off

Run the datapath query commands to display the online adapters and the paths to the adapters. Notice that the preferred paths are used from one of the nodes, that is, path 0 and path 2. Path 1 and path 3 connect to the other node and are used as alternate or backup paths for high availability, as shown in Example 5-35.
Example 5-35 datapath query command example

[root@Palau ~]# datapath query adapter
Active Adapters :2
Adpt#           Name   State     Mode    Select  Errors  Paths  Active
    0  Host0Channel0  NORMAL   ACTIVE         1       0      2       0
    1  Host1Channel0  NORMAL   ACTIVE         0       0      2       0
[root@Palau ~]#
[root@Palau ~]# datapath query device
Total Devices : 1

DEV#: 0 DEVICE NAME: vpatha TYPE: 2145 POLICY: Optimized Sequential
SERIAL: 60050768018201bee000000000000035
============================================================================
Path#    Adapter/Hard Disk    State     Mode      Select  Errors
    0    Host0Channel0/sda    CLOSE     NORMAL         1       0
    1    Host0Channel0/sdb    CLOSE     NORMAL         0       0
    2    Host1Channel0/sdc    CLOSE     NORMAL         0       0
    3    Host1Channel0/sdd    CLOSE     NORMAL         0       0
[root@Palau ~]#

SDD has three path-selection policy algorithms:
- Failover only (fo): All I/O operations for the device are sent to the same (preferred) path unless the path fails because of I/O errors. Then, an alternate path is chosen for subsequent I/O operations.
- Load balancing (lb): The path to use for an I/O operation is chosen by estimating the load on the adapter to which each path is attached. The load is a function of the number of I/O operations currently in process. If multiple paths have the same load, a path is chosen at random from those paths. Load-balancing mode also incorporates failover protection. The load-balancing policy is also known as the optimized policy.
- Round-robin (rr): The path to use for each I/O operation is chosen at random from paths that were not used for the last I/O operation. If a device has only two paths, SDD alternates between the two paths.

You can dynamically change the SDD path-selection policy algorithm by using the datapath set device policy command, and you can see which algorithm is active on a device by using the datapath query device command. Example 5-35 on page 203 shows that the active policy is Optimized Sequential. Example 5-36 shows the volume information from the SVC command-line interface.
Example 5-36 svcinfo redhat1

IBM_2145:ITSOSVC42A:admin>svcinfo lshost linux2
id 6
name linux2
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B89C1CD
node_logged_in_count 2
state active
WWPN 210000E08B054CAA
node_logged_in_count 2
state active
IBM_2145:ITSOSVC42A:admin>
IBM_2145:ITSOSVC42A:admin>svcinfo lshostvdiskmap linux2
id name   SCSI_id vdisk_id vdisk_name wwpn             vdisk_UID
6  linux2 0       33       linux_vd1  210000E08B89C1CD 60050768018201BEE000000000000035
IBM_2145:ITSOSVC42A:admin>
IBM_2145:ITSOSVC42A:admin>svcinfo lsvdisk linux_vd1
id 33
name linux_vd1
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG0
capacity 1.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018201BEE000000000000035
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
IBM_2145:ITSOSVC42A:admin>
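To change the path-selection policy described above, a minimal hedged sketch follows; the device number 0 and the rr (round-robin) policy are examples only:

[root@Palau ~]# datapath set device 0 policy rr

A subsequent datapath query device shows the new setting in the POLICY field.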

5.8.6 Creating and preparing the SDD volumes for use


Follow these steps to create and prepare the volumes: 1. Create a partition on the vpath device, as shown in Example 5-37.
Example 5-37 fdisk example

[root@Palau ~]# fdisk /dev/vpatha
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): m
Command action
   a   toggle a bootable flag
   b   edit bsd disklabel
   c   toggle the dos compatibility flag
   d   delete a partition
   l   list known partition types
   m   print this menu
   n   add a new partition
   o   create a new empty DOS partition table
   p   print the partition table
   q   quit without saving changes
   s   create a new empty Sun disklabel
   t   change a partition's system id
   u   change display/entry units
   v   verify the partition table
   w   write table to disk and exit
   x   extra functionality (experts only)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
e
Partition number (1-4): 1
First cylinder (1-1011, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1011, default 1011):
Using default value 1011

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@Palau ~]#

2. Create a file system on the vpath, as shown in Example 5-38.
Example 5-38 mkfs command example

[root@Palau ~]# mkfs -t ext3 /dev/vpatha
mke2fs 1.35 (28-Feb-2004)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
131072 inodes, 262144 blocks
13107 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 27 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@Palau ~]#

3. Create the mount point, and mount the vpath drive, as shown in Example 5-39.
Example 5-39 Mount point

[root@Palau ~]# mkdir /itsosvc
[root@Palau ~]# mount -t ext3 /dev/vpatha /itsosvc

4. The drive is now ready for use. The df command shows us the mounted disk /itsosvc, and the datapath query command shows that four paths are available; see Example 5-40.
Example 5-40 Display mounted drives

[root@Palau ~]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      74699952   2564388  68341032   4% /
/dev/hda1               101086     13472     82395  15% /boot
none                   1033136         0   1033136   0% /dev/shm
/dev/vpatha            1032088     34092    945568   4% /itsosvc
[root@Palau ~]#

[root@Palau ~]# datapath query device
Total Devices : 1

DEV#: 0 DEVICE NAME: vpatha TYPE: 2145 POLICY: Optimized Sequential
SERIAL: 60050768018201bee000000000000035
============================================================================
Path#    Adapter/Hard Disk    State     Mode      Select  Errors
    0    Host0Channel0/sda    OPEN      NORMAL         1       0
    1    Host0Channel0/sdb    OPEN      NORMAL      6296       0
    2    Host1Channel0/sdc    OPEN      NORMAL      6178       0
    3    Host1Channel0/sdd    OPEN      NORMAL         0       0
[root@Palau ~]#

5.8.7 Using the operating system Device Mapper Multipath (DM-MPIO)


Red Hat Enterprise Linux 5 and later and SUSE Linux Enterprise Server 10 and later provide their own multipath support as part of the operating system, so you do not have to install an additional device driver. Always check whether your operating system includes one of the supported multipath drivers; you will find this information in the links that are provided in 5.8.2, Configuration information on page 200. In SLES10, the multipath drivers and tools are installed by default, but for RHEL5, the user has to explicitly choose the multipath components during the operating system installation to install them.

Each of the attached SAN Volume Controller LUNs has a special device file in the Linux /dev directory. Hosts that use 2.6 kernel Linux operating systems can have as many FC disks as the SVC allows. The following website provides the most current information about the maximum configuration for the SAN Volume Controller:

http://www.ibm.com/storage/support/2145

5.8.8 Creating and preparing DM-MPIO volumes for use


First, you have to start the MPIO daemon on your system. Run the following commands on your host system:
1. Enable MPIO for SLES10 by running the following commands:

/etc/init.d/boot.multipath {start|stop}
/etc/init.d/multipathd {start|stop|status|try-restart|restart|force-reload|reload|probe}

Tip: Run insserv boot.multipath multipathd to automatically load the multipath driver and multipathd daemon during startup.

2. Enable MPIO for RHEL5 by running the following commands:

modprobe dm-multipath
modprobe dm-round-robin
service multipathd start
chkconfig multipathd on


Example 5-41 shows the commands issued on a Red Hat Enterprise Linux 5.1 operating system.
Example 5-41 Starting MPIO daemon on Red Hat Enterprise Linux

[root@palau ~]# modprobe dm-round-robin
[root@palau ~]# multipathd start
[root@palau ~]# chkconfig multipathd on
[root@palau ~]#

3. Open the multipath.conf file and follow the instructions to enable multipathing for IBM devices. The file is located in the /etc directory. Example 5-42 shows editing using vi.
Example 5-42 Editing the multipath.conf file

[root@palau etc]# vi multipath.conf

4. Add the following entry to the multipath.conf file:

device {
        vendor "IBM"
        product "2145"
        path_grouping_policy group_by_prio
        prio_callout "/sbin/mpath_prio_alua /dev/%n"
}

Note: Example multipath.conf files can be downloaded from the IBM Subsystem Device Driver for Linux website at:
http://ibm.com/support/docview.wss?uid=ssg1S4000107#DM

5. Restart the multipath daemon; see Example 5-43.
Example 5-43 Stopping and starting the multipath daemon

[root@palau ~]# service multipathd stop
Stopping multipathd daemon:                                [  OK  ]
[root@palau ~]# service multipathd start
Starting multipathd daemon:                                [  OK  ]

6. Type the multipath -dl command to display the MPIO configuration. You will see two path groups with two paths each; all paths must have the state [active][ready], and one group is [enabled]. An illustrative sketch of this output follows.
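The following is an illustrative sketch of the kind of output produced on an RHEL5-era system only; the map name, WWID, priorities, and device numbers are hypothetical and will differ on your system:

mpath0 (360050768018201bee000000000000035) dm-2 IBM,2145
[size=4.0G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=100][active]
 \_ 4:0:0:0 sda 8:0   [active][ready]
 \_ 5:0:0:0 sdc 8:32  [active][ready]
\_ round-robin 0 [prio=20][enabled]
 \_ 4:0:1:0 sdb 8:16  [active][ready]
 \_ 5:0:1:0 sdd 8:48  [active][ready]

The higher-priority group holds the preferred paths to one SVC node; the [enabled] group holds the non-preferred paths to the partner node.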


7. Use the fdisk command to create a partition on the SVC disk, as shown in Example 5-44.
Example 5-44 fdisk

[root@palau scsi]# fdisk -l

Disk /dev/hda: 80.0 GB, 80032038912 bytes
255 heads, 63 sectors/track, 9730 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1          13      104391   83  Linux
/dev/hda2              14        9730    78051802+  8e  Linux LVM

Disk /dev/sda: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sda doesn't contain a valid partition table

Disk /dev/sdb: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/sdc: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdc doesn't contain a valid partition table

Disk /dev/sdd: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdd doesn't contain a valid partition table

Disk /dev/sde: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sde doesn't contain a valid partition table

Disk /dev/sdf: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdf doesn't contain a valid partition table

Disk /dev/sdg: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdg doesn't contain a valid partition table

Disk /dev/sdh: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdh doesn't contain a valid partition table

Disk /dev/dm-2: 4244 MB, 4244635648 bytes
255 heads, 63 sectors/track, 516 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/dm-2 doesn't contain a valid partition table

Disk /dev/dm-3: 4244 MB, 4244635648 bytes
255 heads, 63 sectors/track, 516 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/dm-3 doesn't contain a valid partition table

[root@palau scsi]# fdisk /dev/dm-2
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
e
Partition number (1-4): 1
First cylinder (1-516, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-516, default 516):
Using default value 516

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 22: Invalid argument.
The kernel still uses the old table.
The new table will be used at the next reboot.
[root@palau scsi]# shutdown -r now

8. Create a file system using the mkfs command (Example 5-45).


Example 5-45 mkfs command

[root@palau ~]# mkfs -t ext3 /dev/dm-2
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
518144 inodes, 1036288 blocks
51814 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1061158912
32 block groups
32768 blocks per group, 32768 fragments per group
16192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@palau ~]#

9. Create a mount point, and mount the drive, as shown in Example 5-46.
Example 5-46 Mount point

[root@palau ~]# mkdir /svcdisk_0
[root@palau ~]# cd /svcdisk_0/
[root@palau svcdisk_0]# mount -t ext3 /dev/dm-2 /svcdisk_0
[root@palau svcdisk_0]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      73608360   1970000  67838912   3% /
/dev/hda1               101086     15082     80785  16% /boot
tmpfs                   967984         0    967984   0% /dev/shm
/dev/dm-2              4080064     73696   3799112   2% /svcdisk_0

5.9 VMware configuration information


This section explains the requirements and additional information for attaching the SAN Volume Controller to hosts running the VMware operating system and to the variety of guest operating systems that run on them.

5.9.1 Configuring VMware hosts


To configure the VMware hosts, follow these steps:
1. Install the HBAs in your host system, as described in 5.9.3, HBAs for hosts running VMware on page 212.


2. Connect the server FC host adapters to the switches.
3. Configure the switches (zoning), as described in 5.9.4, VMware storage and zoning guidance on page 212.
4. Install the VMware operating system (if not already done), and check the HBA timeouts, as described in 5.9.5, Setting the HBA timeout for failover in VMware on page 213.
5. Configure the host, volumes, and host mapping in the SVC, as described in 5.9.7, Attaching VMware to volumes on page 214.

5.9.2 Operating system versions and maintenance levels


For the latest information about VMware support, refer to this website:
http://ibm.com/systems/storage/software/virtualization/svc/interop.html

At the time of writing, the following versions are supported:
- ESX 4.x
- ESX V3.5

5.9.3 HBAs for hosts running VMware


Ensure that your hosts that are running on VMware operating systems use the correct HBAs and firmware levels. Install the host adapters in your system, and refer to the manufacturer's instructions for installation and configuration of the HBAs. For older ESX versions, you will find the supported HBAs at the IBM website:

http://ibm.com/storage/support/2145

In most cases, the supported HBA device drivers are already included in the ESX server build, but for some newer storage adapters, you might be required to load additional ESX drivers. Check the VMware HCL if you need to load a custom driver for your adapter:

http://www.vmware.com/resources/compatibility/search.php

After installing, load the default configuration of your FC HBAs. Use the same model of HBA with the same firmware in one server. It is not supported to have Emulex and QLogic HBAs that access the same target in one server.

SAN boot support


SAN boot of any guest operating system is supported under VMware. In fact, the nature of VMware means that SAN boot is effectively a given for any guest operating system, because the guest operating system must reside on a SAN disk. If you are unfamiliar with VMware environments and the advantages of storing virtual machines and application data on a SAN, it is useful to get an overview of VMware products before continuing. VMware documentation is available at this website:

http://www.vmware.com/support/pubs/

5.9.4 VMware storage and zoning guidance


The VMware ESX server can use a Virtual Machine File System (VMFS). VMFS is a file system that is optimized to run multiple virtual machines as one workload to minimize disk


I/O. It is also able to handle concurrent access from multiple physical machines, because it enforces the appropriate access controls. Therefore, multiple ESX hosts can share the same set of LUNs.

Theoretically, you can run all of your virtual machines on one LUN. However, for performance reasons in more complex scenarios, it can be better to load balance virtual machines over separate HBAs, storage systems, or arrays. If you run an ESX host with several virtual machines, for example, it makes sense to use one slow array for guest operating systems without high I/O, such as Print and Active Directory Services, and another fast array for database guest operating systems.

Using fewer volumes has the following advantages:
- More flexibility to create virtual machines without creating new space on the SVC
- More possibilities for taking VMware snapshots
- Fewer volumes to manage

Using more and smaller volumes has the following advantages:
- Separate I/O characteristics of the guest operating systems
- More flexibility (the multipathing policy and disk shares are set per volume)
- Microsoft Cluster Service requires its own volume for each cluster disk resource

More documentation about designing your VMware infrastructure is provided at these websites:
http://www.vmware.com/vmtn/resources/
http://www.vmware.com/resources/techresources/1059

Guidelines:
- ESX Server hosts that use shared storage for virtual machine failover or load balancing must be in the same zone.
- You can have only one VMFS volume per volume.

5.9.5 Setting the HBA timeout for failover in VMware


The timeout for failover for ESX hosts must be set to 30 seconds:
- For QLogic HBAs, the timeout depends on the PortDownRetryCount parameter. The timeout value is 2 x PortDownRetryCount + 5 seconds. Set the qlport_down_retry parameter to 14.
- For Emulex HBAs, the lpfc_linkdown_tmo and the lpfc_nodev_tmo parameters must be set to 30 seconds.

To make these changes on your system, perform the following steps; see Example 5-47 on page 213:
1. Back up the /etc/vmware/esx.conf file.
2. Open the /etc/vmware/esx.conf file for editing. The file includes a section for every installed SCSI device.
3. Locate your SCSI adapters, and edit the previously described parameters.
4. Repeat this process for every installed HBA.

Example 5-47 Setting the HBA timeout

[root@nile svc]# cp /etc/vmware/esx.conf /etc/vmware/esx.confbackup


[root@nile svc]# vi /etc/vmware/esx.conf

5.9.6 Multipathing in ESX


The VMware ESX Server performs multipathing. You do not need to install an additional multipathing driver, such as SDD.

5.9.7 Attaching VMware to volumes


First, we make sure that the VMware host is logged into the SAN Volume Controller. In our examples, we use the VMware ESX server V3.5 and the host name Nile. Enter the following command to check the status of the host:

svcinfo lshost <hostname>

Example 5-48 shows that the host Nile is logged into the SVC with two HBAs.
Example 5-48 lshost Nile

IBM_2145:ITSO-CLS1:admin>svcinfo lshost Nile
id 1
name Nile
port_count 2
type generic
mask 1111
iogrp_count 2
WWPN 210000E08B892BCD
node_logged_in_count 4
state active
WWPN 210000E08B89B8C0
node_logged_in_count 4
state active

Then, we have to set the SCSI Controller Type in VMware. By default, ESX Server disables SCSI bus sharing and does not allow multiple virtual machines to access the same VMFS file at the same time; see Figure 5-35 on page 215. But in many configurations, such as those configurations for high availability, the virtual machines have to share the same VMFS file to share a disk.

To set the SCSI Controller Type in VMware:
1. Log on to your Infrastructure Client, shut down the virtual machine, right-click it, and select Edit settings.
2. Highlight the SCSI Controller, and select one of the three available settings, depending on your configuration:
   - None: Disks cannot be shared by other virtual machines.
   - Virtual: Disks can be shared by virtual machines on the same server.
   - Physical: Disks can be shared by virtual machines on any server.
   Click OK to apply the setting.


Figure 5-35 Changing SCSI bus settings

3. Create your volumes on the SVC, then map them to the ESX hosts.

Tips: If you want to use features, such as VMotion, the volumes that own the VMFS file have to be visible to every ESX host that will be able to host the virtual machine. In SVC, select Allow the virtual disks to be mapped even if they are already mapped to a host. The volume has to have the same SCSI ID on each ESX host.

For this configuration, we created one volume and mapped it to our ESX host, as shown in Example 5-49.
Example 5-49 Mapped volume to ESX host Nile

IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap Nile
id name SCSI_id vdisk_id vdisk_name wwpn             vdisk_UID
1  Nile 0       12       VMW_pool   210000E08B892BCD 60050768018301BF2800000000000010

ESX does not automatically scan for SAN changes (except when rebooting the entire ESX server). If you have made any changes to your SVC or SAN configuration, perform the following steps:
1. Open your VMware Infrastructure Client.
2. Select the host.
3. In the Hardware window, choose Storage Adapters.
4. Click Rescan.


To configure a storage device for use in VMware, perform the following steps:
1. Open your VMware Infrastructure Client.
2. Select the host for which you want to see the assigned volumes, and click the Configuration tab.
3. In the Hardware window on the left side, click Storage.
4. To create a new storage pool, select click here to create a datastore, or select Add storage if the yellow field does not appear (Figure 5-36).

Figure 5-36 VMWare add datastore

5. The Add storage wizard appears.
6. Select Create Disk/LUN, and click Next.
7. Select the SVC volume that you want to use for the datastore, and click Next.
8. Review the disk layout, and click Next.
9. Enter a datastore name, and click Next.
10. Select a block size, enter the size of the new partition, and then click Next.
11. Review your selections, and click Finish.

Now, the created VMFS datastore appears in the Storage window (Figure 5-37). You will see the details for the highlighted datastore. Check whether all of the paths are available and that the Path Selection is set to Round Robin.

Figure 5-37 VMWare storage configuration


If not all of the paths are available, check your SAN and storage configuration. After fixing the problem, select Refresh to perform a path rescan. The view will be updated to the new configuration.

Best practice is to use the Round Robin multipath policy for SVC. If you have to edit this policy, perform the following steps:
1. Highlight the datastore.
2. Click Properties.
3. Click Managed Paths.
4. Click Change (see Figure 5-37 on page 216).
5. Select Round Robin.
6. Click OK.
7. Click Close.

Now, your VMFS datastore has been created, and you can start using it for your guest operating systems. Round Robin distributes the I/O load across all available paths. If you want to use a fixed path instead, the policy setting Fixed is supported as well.

5.9.8 Volume naming in VMware


In the Virtual Infrastructure Client, a volume is displayed as a sequence of three or four numbers, separated by colons (Figure 5-38):

<SCSI HBA>:<SCSI target>:<SCSI volume>:<disk partition>

where:
- SCSI HBA: The number of the SCSI HBA (can change).
- SCSI target: The number of the SCSI target (can change).
- SCSI volume: The number of the volume (never changes).
- disk partition: The number of the disk partition (never changes).

If the last number is not displayed, the name stands for the entire volume.

Figure 5-38 Volume naming in VMware


5.9.9 Setting the Microsoft guest operating system timeout


For a Microsoft Windows 2000 Server or Windows Server 2003 installed as a VMware guest operating system, the disk timeout value must be set to 60 seconds. We provide the instructions to perform this task in 5.5.5, Changing the disk timeout on Microsoft Windows Server on page 173.
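As a hedged illustration of that setting, the change can also be made from a command prompt inside the guest; the TimeOutValue registry value under the Disk service is the standard Windows disk timeout, and 60 is the decimal timeout in seconds:

C:\> reg add HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue /t REG_DWORD /d 60 /f

Reboot the guest afterward so that the disk class driver picks up the new value.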

5.9.10 Extending a VMFS volume


It is possible to extend VMFS volumes while virtual machines are running. First, you have to extend the volume on the SVC, and then you are able to extend the VMFS volume. Before performing these steps, perform a backup of your data. Perform the following steps to extend a volume:
1. Expand the volume with the svctask expandvdisksize -size <size> -unit gb <VDiskname> command; see Example 5-50.
Example 5-50 Expanding a volume in SVC

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk VMW_pool
id 12
name VMW_pool
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 60.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000010
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 60.00GB
real_capacity 60.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

IBM_2145:ITSO-CLS1:admin>svctask expandvdisksize -size 5 -unit gb VMW_pool

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk VMW_pool
id 12
name VMW_pool
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 65.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000010
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 65.00GB
real_capacity 65.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
IBM_2145:ITSO-CLS1:admin>


2. Open the Virtual Infrastructure Client.
3. Select the host.
4. Select Configuration.
5. Select Storage Adapters.
6. Click Rescan.
7. Make sure that the Scan for new Storage Devices check box is marked, and click OK. After the scan has completed, the new capacity is displayed in the Details section.
8. Click Storage.
9. Right-click the VMFS volume, and click Properties.
10. Click Add Extent.
11. Select the new free space, and click Next.
12. Click Next.
13. Click Finish.

The VMFS volume has now been extended, and the new space is ready for use.

5.9.11 Removing a datastore from an ESX host


Before you remove a datastore from an ESX host, you have to migrate or delete all of the virtual machines that reside on this datastore. To remove the datastore, perform the following steps:
1. Back up the data.
2. Open the Virtual Infrastructure Client.
3. Select the host.
4. Select Configuration.
5. Select Storage.
6. Highlight the datastore that you want to remove.
7. Click Remove.
8. Read the warning, and if you are sure that you want to remove the datastore and delete all of the data on it, click Yes.
9. Remove the host mapping on the SVC, or delete the volume (as shown in Example 5-51).
10. In the VI Client, select Storage Adapters.
11. Click Rescan.
12. Make sure that the Scan for new Storage Devices check box is marked, and click OK.
13. After the scan completes, the disk disappears from the view.

Your datastore has been successfully removed from the system.
Example 5-51 Remove host mapping: Delete volume

IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Nile VMW_pool
IBM_2145:ITSO-CLS1:admin>svctask rmvdisk VMW_pool


5.10 Sun Solaris support information


For the latest information about supported software and driver levels, always refer to this website: http://ibm.com/systems/storage/software/virtualization/svc/interop.html

5.10.1 Operating system versions and maintenance levels


At the time of writing, Sun Solaris 8, Sun Solaris 9, and Sun Solaris 10 are supported.

5.10.2 SDD dynamic pathing


Solaris supports dynamic pathing when you either add more paths to an existing volume, or present a new volume to a host. No user intervention is required. SDD is aware of the preferred paths that SVC sets per volume. SDD will use a round-robin algorithm when failing over paths. That is, it will try the next known preferred path. If this method fails and all preferred paths have been tried, it will use a round-robin algorithm on the non-preferred paths until it finds a path that is available. If all paths are unavailable, the volume will go offline. Therefore, it can take time to perform path failover when multiple paths go offline. SDD under Solaris performs load balancing across the preferred paths where appropriate.

Veritas Volume Manager with dynamic multipathing


Veritas Volume Manager (VM) with dynamic multipathing (DMP) automatically selects the next available I/O path for I/O requests without action from the administrator. VM with DMP is also informed when you repair or restore a connection, and when you add or remove devices after the system has been fully booted (provided that the operating system recognizes the devices correctly). The new Java Native Interface (JNI) drivers support the host mapping of new volumes without rebooting the Solaris host. Note the following support characteristics:
- Veritas VM with DMP supports load balancing across multiple paths with SVC.
- Veritas VM with DMP does not support preferred pathing with SVC.

Coexistence with SDD and Veritas VM with DMP


Veritas Volume Manager with DMP will coexist in pass-through mode with SDD. DMP will use the vpath devices that are provided by SDD.

OS cluster support
Solaris with Symantec Cluster V4.1, Symantec SFHA and SFRAC V4.1/5.0, and Solaris with Sun Cluster V3.1/3.2 are supported at the time of writing.

SAN boot support


Note the following support characteristics:
- Boot from SAN is supported under Solaris 9 running Symantec Volume Manager.
- Boot from SAN is not supported when SDD is used as the multipathing software.


5.11 Hewlett-Packard UNIX configuration information


For the latest information about Hewlett-Packard UNIX (HP-UX) support, refer to this website: http://ibm.com/systems/storage/software/virtualization/svc/interop.html

5.11.1 Operating system versions and maintenance levels


At the time of writing, HP-UX V11.0 and V11i v1/v2/v3 are supported (64-bit only).

5.11.2 Multipath solutions supported


At the time of writing, SDD V1.6.3.0 for HP-UX is supported. The PV Links multipathing software and the Service Guard cluster software V11.14/11.16/11.17/11.18 are also supported, but even in a cluster environment, we suggest that you use SDD.

SDD dynamic pathing


HP-UX supports dynamic pathing when you either add more paths to an existing volume, or present a new volume to a host. SDD is aware of the preferred paths that SVC sets per volume. SDD will use a round-robin algorithm when failing over paths. That is, it will try the next known preferred path. If this method fails and all preferred paths have been tried, it will use a round-robin algorithm on the non-preferred paths until it finds a path that is available. If all paths are unavailable, the volume will go offline. It can take time, therefore, to perform path failover when multiple paths go offline. SDD under HP-UX performs load balancing across the preferred paths where appropriate.

Physical volume links (PVLinks) dynamic pathing


Unlike SDD, PVLinks does not load balance, and it is unaware of the preferred paths that SVC sets per volume. Therefore, it is strongly suggested that you use SDD, except when in a clustering environment or when using an SVC volume as your boot disk.

When creating a Volume Group, specify the primary path that you want HP-UX to use when accessing the Physical Volume that is presented by SVC. This path, and only this path, will be used to access the PV as long as it is available, no matter what the SVC's preferred path to that volume is. Therefore, be careful when creating Volume Groups so that the primary links to the PVs (and the load) are balanced over both HBAs, FC switches, SVC nodes, and so on; a sketch follows this section.

When extending a Volume Group to add alternate paths to the PVs, the order in which you add these paths is HP-UX's order of preference if the primary path becomes unavailable. Therefore, when extending a Volume Group, the first alternate path that you add must be from the same SVC node as the primary path, to avoid unnecessary node failover due to an HBA, FC link, or FC switch failure.
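As a minimal sketch of this ordering, the following HP-UX commands create a Volume Group on a primary path and then add an alternate path; the device special files c10t0d0 and c12t0d0 are hypothetical and stand for two paths to the same SVC volume, and the minor number in the mknod command is only an example:

# pvcreate /dev/rdsk/c10t0d0
# mkdir /dev/vg01
# mknod /dev/vg01/group c 64 0x010000
# vgcreate /dev/vg01 /dev/dsk/c10t0d0
# vgextend /dev/vg01 /dev/dsk/c12t0d0

The path named in vgcreate becomes the primary link; each later vgextend adds an alternate link in order of preference.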

5.11.3 Coexistence of SDD and PV Links


If you want to multipath a volume with PVLinks while SDD is installed, you need to make sure that SDD does not configure a vpath for that volume. To do this, put the serial number of any volume that you want SDD to ignore in the /etc/vpathmanualexcl.cfg file. In the case of SAN boot, if you are booting from an SVC volume, when you install SDD (from Version 1.6 onward), SDD automatically ignores the boot volume.


SAN boot support


SAN boot is supported on HP-UX by using PVLinks as the multipathing software on the boot device. You can use PVLinks or SDD to provide the multipathing support for the other devices that are attached to the system.

5.11.4 Using an SVC volume as a cluster lock disk


ServiceGuard does not provide a way to specify alternate links to a cluster lock disk. When using an SVC volume as your lock disk, if the path to FIRST_CLUSTER_LOCK_PV becomes unavailable, the HP node will not be able to access the lock disk if a 50-50 split in quorum occurs. To ensure redundancy, when editing your Cluster Configuration ASCII file, make sure that the variable FIRST_CLUSTER_LOCK_PV has a separate path to the lock disk for each HP node in your cluster. For example, when configuring a two-node HP cluster, make sure that FIRST_CLUSTER_LOCK_PV on HP server A goes through a separate SVC node and a separate FC switch from FIRST_CLUSTER_LOCK_PV on HP server B.

5.11.5 Support for HP-UX with greater than eight LUNs


HP-UX will not recognize more than eight LUNs per port using the generic SCSI behavior. To accommodate this behavior, SVC supports a type attribute that is associated with a host. This type can be set using the svctask mkhost command and modified using the svctask chhost command; the default type is generic, and it can be set to hpux for HP-UX hosts (see the sketch after the following list). When an initiator port, which is a member of a host of type HP-UX, accesses an SVC, the SVC behaves in the following way:

- Flat Space Addressing mode is used rather than the Peripheral Device Addressing mode.
- When an inquiry command for any page is sent to LUN 0 using Peripheral Device Addressing, it is reported as Peripheral Device Type 0Ch (controller).
- When any command other than an inquiry is sent to LUN 0 using Peripheral Device Addressing, SVC responds as an unmapped LUN 0 normally responds.
- When an inquiry is sent to LUN 0 using Flat Space Addressing, it is reported as Peripheral Device Type 00h (Direct Access Device) if a LUN is mapped at LUN 0, or as 1Fh (Unknown Device Type) otherwise.
- When an inquiry is sent to an unmapped LUN that is not LUN 0 using Peripheral Device Addressing, the Peripheral qualifier returned is 001b, and the Peripheral Device Type is 1Fh (unknown or no device type). This response is in contrast to the behavior for generic hosts, where Peripheral Device Type 00h is returned.
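As a hedged sketch of the commands named above, the following CLI session creates an HP-UX host and changes the type of an existing host; the host name hpux01 and the WWPN are hypothetical, and we assume that hpux is the accepted type value on your code level:

IBM_2145:ITSO-CLS1:admin>svctask mkhost -name hpux01 -hbawwpn 10000000C912345A -type hpux
IBM_2145:ITSO-CLS1:admin>svctask chhost -type hpux existing_hp_host

You can confirm the setting with svcinfo lshost hpux01, which reports the type field.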

5.12 Using SDDDSM, SDDPCM, and SDD web interface


After installing the SDDDSM or SDD driver, specific commands are available. To open a command window for SDDDSM or SDD, from the desktop, select Start → Programs → Subsystem Device Driver → Subsystem Device Driver Management. The command documentation for the various operating systems is available in the Multipath Subsystem Device Driver User's Guides:
http://ibm.com/support/docview.wss?uid=ssg1S7000303

Chapter 5. Host configuration

223

7933 05 Host Configuration Christian.fm

Draft Document for Review January 17, 2012 6:10 am

It is also possible to configure SDDDSM to offer a web interface that provides some basic information. Before this interface can be used, it must be configured. The sddsrv daemon does not bind to any TCP/IP port by default, but it allows port binding to be dynamically enabled or disabled. For all platforms except Linux, the multipath driver package ships a template file named sample_sddsrv.conf. On all UNIX platforms except Linux, the sample_sddsrv.conf file is located in the /etc directory. On Windows platforms, it is located in the directory where SDDDSM was installed. Create the sddsrv.conf file by copying sample_sddsrv.conf to the same directory and naming the copy sddsrv.conf. You can then dynamically change the port binding by modifying the parameters in the sddsrv.conf file and changing the values of Enableport and Loopbackbind to True. Figure 5-39 shows the start window of the multipath driver web interface.

Figure 5-39 SDD web interface
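On a UNIX host, the configuration steps might look as follows (a sketch only; the file location varies by platform as described above, and the parameter names should be verified against the shipped template file):

cp /etc/sample_sddsrv.conf /etc/sddsrv.conf
# Then edit /etc/sddsrv.conf and set:
#   Enableport = true
#   Loopbackbind = true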

5.13 Calculating the queue depth


The queue depth is the number of I/O operations that can be run in parallel on a device. It is usually possible to set a limit on the queue depth on the SDD paths (or equivalent) or on the HBA. In configurations that contain a large number of servers or volumes, ensure that you configure the servers to limit the queue depth on all of the paths to the SAN Volume Controller disks. You might have a number of servers in the configuration that are idle or that do not initiate the calculated quantity of I/O operations. In that case, you might not need to limit the queue depth.
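As a rough guide, the SVC documentation describes a homogeneous queue depth calculation of the following form (treat this as a sketch and verify the formula and its conditions in the SVC Information Center for your code level):

q = (n x 7000) / (v x p x c)

Here, q is the suggested queue depth per device path, n is the number of nodes in the cluster, v is the number of volumes configured in the cluster, p is the number of paths per volume per host, and c is the number of hosts. For example, with an assumed cluster of 4 nodes, 200 volumes, 4 paths per volume, and 10 hosts, q = 28000 / 8000, which suggests limiting the queue depth on each path to about 3.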
5.14 Further sources of information


For more information about host attachment and configuration to the SVC, refer to IBM System Storage SAN Volume Controller: Host Attachment User's Guide, SC26-7905.

For more information about SDDDSM configuration, refer to the latest IBM System Storage Multipath Subsystem Device Driver User's Guide, available from:
http://ibm.com/support/docview.wss?uid=ssg1S7000303

The IBM SVC Information Center provides comprehensive information about host attachment, storage subsystem attachment, troubleshooting, and much more:
http://publib.boulder.ibm.com/infocenter/svc/ic/index.jsp

5.14.1 Publications containing SVC storage subsystem attachment guidelines


It is beyond the scope of this document to describe the attachment to each subsystem that the SVC supports. Here is a short list of the publications that we found especially useful in the writing of this book, and in the field:

SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521, describes in detail how you can optimize your back-end storage to maximize performance on the SVC:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open

DS8000 Performance Monitoring and Tuning, SG24-7146, describes the guidelines and procedures to make the most of the performance that is available from your DS8000 storage subsystem when attached to the IBM SAN Volume Controller:
http://www.redbooks.ibm.com/abstracts/sg247146.html?Open

IBM Midrange System Storage Implementation and Best Practices Guide, SG24-6363, explains how to connect and configure your storage for optimized performance on the SVC:
http://www.redbooks.ibm.com/abstracts/sg246363.html?Open

IBM XIV Storage System: Architecture, Implementation and Usage, SG24-7659, discusses specific considerations for attaching the XIV Storage System to a SAN Volume Controller:
http://www.redbooks.ibm.com/abstracts/sg247659.html?Open

Chapter 6. Data migration
In this chapter we explain how to migrate from a conventional storage infrastructure to a virtualized storage infrastructure by using the IBM System Storage SAN Volume Controller (SVC). We also explain how the SVC can be phased out of a virtualized storage infrastructure, for example, after a trial period or after using the SVC as a data migration tool. Next, we describe how to migrate from a fully allocated volume to a thin-provisioned volume by using the volume mirroring feature and the thin-provisioned volume together. Finally, we provide you with examples of using intracluster Metro Mirror to migrate data.

6.1 Migration overview


The SVC allows you to change the mapping of volume extents to managed disk (MDisk) extents without interrupting host access to the volume. This functionality is utilized when performing volume migrations, and it applies to any volume that is defined on the SVC. This functionality can be used for these tasks:

- Migrating data from older back-end storage to SVC-managed storage
- Migrating data from one back-end controller to another back-end controller, using the SVC as a data block mover, and afterwards removing the SVC from the SAN
- Migrating data from managed mode back into image mode prior to removing the SVC from a SAN
- Redistributing volumes, and therefore the workload within an SVC cluster, across back-end storage:
  - Moving workload onto newly installed storage
  - Moving workload off of old or failing storage, ahead of decommissioning it
  - Moving workload to rebalance a changed workload
- Migrating data from one SVC cluster to another SVC cluster

6.2 Migration operations


You can perform migration at either the volume or the extent level, depending on the purpose of the migration. The following migration activities are supported:

- Migrating extents within a storage pool, redistributing the extents of a given volume on the MDisks within the same storage pool
- Migrating extents off an MDisk, which is being removed from the storage pool, to other MDisks in the same storage pool
- Migrating a volume from one storage pool to another storage pool
- Migrating a volume to change its virtualization type to image
- Migrating a volume between I/O Groups

6.2.1 Migrating multiple extents (within a storage pool)


You can migrate multiple volume extents at one time by using the migrateexts command. For detailed information about the migrateexts command parameters, use the SVC command-line interface help by typing this command:

svctask migrateexts -h

Or, see IBM System Storage SAN Volume Controller Command-Line Interface User's Guide, GC27-2287.

When executed, this command migrates a given number of extents belonging to the specified volume from the source MDisk, where the extents of that volume reside, to a defined target MDisk that must be part of the same storage pool. You can specify the number of migration threads to be used in parallel (from 1 to 4).
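For example, a command of the following form migrates 64 extents of a volume from one MDisk to another using two parallel threads (the MDisk and volume names are illustrative placeholders; verify the exact syntax against the CLI reference for your code level):

svctask migrateexts -source mdisk0 -target mdisk1 -exts 64 -threads 2 -vdisk VDISK1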

If the type of the volume is image, then the volume type transitions to striped when the first extent is migrated. The MDisk access mode transitions from image to managed.

6.2.2 Migrating extents off an MDisk that is being deleted


When an MDisk is deleted from a storage pool using the rmmdisk -force command, any extents on the MDisk being used by a volume are first migrated off the MDisk and onto other MDisks in the storage pool prior to its deletion. In this case, the extents that need to be migrated are moved onto the set of MDisks that are not being deleted. This statement holds true if multiple MDisks are being removed from the storage pool at the same time. If a volume uses one or more extents that need to be moved as a result of an rmmdisk command, the virtualization type for that volume is set to striped (if it was previously sequential or image). If the MDisk is operating in image mode, the MDisk transitions to managed mode while the extents are being migrated. Upon deletion, it transitions to unmanaged mode.

Using the -force flag: If the -force flag is not used and if volumes occupy extents on one or more of the MDisks that are specified, the command fails. When the -force flag is used and if volumes occupy extents on one or more of the MDisks that are specified, all extents on the MDisks will be migrated to the other MDisks in the storage pool if there are enough free extents in the storage pool. The deletion of the MDisks is postponed until all extents are migrated, which can take time. In the case where there are insufficient free extents in the storage pool, the command fails.
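As a sketch (the MDisk name is an illustrative placeholder; STGPool_DS3500-1 is a pool name used elsewhere in this chapter), removing an MDisk and forcing its extents to be migrated to the remaining MDisks in the pool takes this form:

svctask rmmdisk -mdisk mdisk3 -force STGPool_DS3500-1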

6.2.3 Migrating a volume between storage pools


An entire volume can be migrated from one storage pool to another storage pool by using the migratevdisk command. A volume can be migrated between storage pools regardless of the virtualization type (image, striped, or sequential), although it transitions to the virtualization type of striped. The command varies, depending on the type of migration, as shown in Table 6-1.
Table 6-1 Migration types and associated commands

Storage pool-to-storage pool type    Command
Managed to managed                   migratevdisk
Image to managed                     migratevdisk
Managed to image                     migratetoimage
Image to image                       migratetoimage

Rule: For the migration to be accepted, the source and destination storage pools must have the same extent size. Note that volume mirroring can also be used to migrate a volume between storage pools; this method can be used if the extent sizes of the two pools are not the same.
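As a sketch (the volume name is a placeholder; the pool name matches one used elsewhere in this chapter), a managed-to-managed migration takes this form:

svctask migratevdisk -mdiskgrp STGPool_DS3500-2 -threads 2 -vdisk VDISK1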

Figure 6-1 Managed volume migration to another storage pool

In Figure 6-1, we illustrate volume V3 migrating from Pool 2 to Pool 3. Extents are allocated to the migrating volume from the set of MDisks in the target storage pool, using the extent allocation algorithm. The process can be prioritized by specifying the number of threads that will be used in parallel (from 1 to 4) while migrating; using only one thread will put the least background load on the system. The offline rules apply to both storage pools. Therefore, referring back to Figure 6-1, if any of the M4, M5, M6, or M7 MDisks go offline, then the V3 volume goes offline. If the M4 MDisk goes offline, then V3 and V5 go offline, but V1, V2, V4, and V6 remain online. If the type of the volume is image, then the volume type transitions to striped when the first extent is migrated. The MDisk access mode transitions from image to managed. For the duration of the move, the volume is listed as being a member of the original storage pool. For the purposes of configuration, the volume moves to the new storage pool instantaneously at the end of the migration.

6.2.4 Migrating the volume to image mode


The facility to migrate a volume to an image mode volume can be combined with the ability to migrate between storage pools. The source for the migration can be a managed mode or an image mode volume. This leads to four possibilities:

- Migrate image mode-to-image mode within a storage pool.
- Migrate managed mode-to-image mode within a storage pool.

- Migrate image mode-to-image mode between storage pools.
- Migrate managed mode-to-image mode between storage pools.

These conditions must apply to be able to migrate:

- The destination MDisk must be greater than or equal to the size of the volume.
- The MDisk that is specified as the target must be in an unmanaged state at the time that the command is run.

If the migration is interrupted by a cluster recovery, the migration resumes after the recovery completes. If the migration involves moving between storage pools, the volume behaves as described in 6.2.3, Migrating a volume between storage pools on page 229.

Regardless of the mode in which the volume starts, it is reported as being in managed mode during the migration. Also, both of the MDisks involved are reported as being in image mode during the migration. Upon completion of the command, the volume is classified as an image mode volume.
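As a sketch (the volume, MDisk, and pool names are placeholders), migrating a volume to image mode onto a specific unmanaged MDisk takes this form:

svctask migratetoimage -vdisk VDISK1 -mdisk mdisk10 -mdiskgrp IMAGE_POOL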

6.2.5 Migrating a volume between I/O Groups


A volume can be migrated between I/O Groups by using the svctask chvdisk command. This command is supported only if the volume is not in a FlashCopy mapping or Remote Copy relationship.

To move a volume between I/O Groups, the cache must first be flushed. The SVC attempts to destage all write data for the volume from the cache during the I/O Group move. This flush fails if data has been pinned in the cache for any reason (such as a storage pool being offline). By default, this failed flush causes the migration between I/O Groups to fail, but this behavior can be overridden by using the -force flag. If the -force flag is used and the SVC is unable to destage all write data from the cache, the contents of the volume are corrupted by the loss of the cached data. During the flush, the volume operates in cache write-through mode.

Important: Do not move a volume to an offline I/O Group under any circumstance. To avoid any data loss, you must ensure that the I/O Group is online before you move the volumes.

You must quiesce host I/O before the migration for two reasons:

- If there is significant data in the cache that takes a long time to destage, the command line will time out.
- Subsystem Device Driver (SDD) vpaths that are associated with the volume are deleted before the volume move takes place. Data corruption can therefore occur if I/O is still occurring for a particular logical unit number (LUN) ID.

When migrating a volume between I/O Groups, you can specify the preferred node, if desired, or you can let the SVC assign the preferred node. A volume that is a member of a FlashCopy mapping or a Remote Copy relationship cannot be moved to another I/O Group, and you cannot override this restriction by using the -force flag on the chvdisk command. You must delete the mapping or relationship before the volume can be migrated between I/O Groups.
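After quiescing host I/O, a move of this form can be used (the volume and I/O Group names are placeholders; add the -force flag only if you accept the risks described above):

svctask chvdisk -iogrp io_grp1 VDISK1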

6.2.6 Monitoring the migration progress


To monitor the progress of ongoing migrations, use this CLI command:

svcinfo lsmigrate

To determine the extent allocation of MDisks and volumes, use the following commands:

- To list the volume IDs and the corresponding number of extents that the volumes occupy on the queried MDisk:
svcinfo lsmdiskextent <mdiskname | mdisk_id>
- To list the MDisk IDs and the corresponding number of extents that the queried volumes occupy on the listed MDisks:
svcinfo lsvdiskextent <vdiskname | vdisk_id>
- To list the number of available free extents on an MDisk:
svcinfo lsfreeextents <mdiskname | mdisk_id>

Important: After a migration has been started, there is no way for you to stop it. The migration runs to completion unless it is stopped or suspended by an error condition, or unless the volume being migrated is deleted. If you want the ability to start, suspend, or cancel a migration, or to control the rate of migration, consider using the volume mirroring function or migrating volumes between storage pools.

6.3 Functional overview of migration


This section describes the functional view of data migration.

6.3.1 Parallelism
You can perform several of the following activities in parallel.

Per cluster
An SVC cluster supports up to 32 active concurrent instances of these migration activities:

- Migrate multiple extents
- Migrate between storage pools
- Migrate off a deleted MDisk
- Migrate to image mode

These high-level migration tasks operate by scheduling single extent migrations. Up to 256 single extent migrations can run concurrently; this number is made up of the single extent migrations that result from the operations previously listed. The Migrate Multiple Extents and Migrate Between Storage Pools commands support a flag that allows you to specify the number of parallel threads to use, between 1 and 4. This parameter affects the number of extents that are concurrently migrated for that migration operation. Thus, if the thread value is set to 4, up to four extents can be migrated concurrently for that operation.

Per MDisk
The SVC supports up to four concurrent single extent migrates per MDisk. This limit does not take into account whether the MDisk is the source or the destination. If more than four single extent migrates are scheduled for a particular MDisk, further migrations are queued pending the completion of one of the currently running migrations.

6.3.2 Error handling


The migration is suspended or stopped if a medium error occurs on a read from the source, if the destination's medium error table is full, if an I/O error occurs repeatedly on a read from the source, or if the MDisks go offline repeatedly.

The migration is suspended if any of the following conditions exist; otherwise, it is stopped:

- The migration is between storage pools and has progressed beyond the first extent. These migrations are always suspended rather than stopped, because stopping a migration in progress leaves a volume spanning storage pools, which is not a valid configuration other than during a migration.
- The migration is a migrate to image mode (even if it is processing the first extent). These migrations are always suspended rather than stopped, because stopping a migration in progress leaves the volume in an inconsistent state.
- The migration is waiting for a metadata checkpoint that has failed.

If a migration is stopped, any migrations that are queued awaiting the use of the MDisk for migration now commence. However, if a migration is suspended, it continues to use resources, so another migration is not started. The SVC attempts to resume the migration if the error log entry is marked as fixed using the CLI or the GUI. If the error condition no longer exists, the migration proceeds. The migration might resume on a node other than the node that started it.

6.3.3 Migration algorithm


This section describes the effect of the migration algorithm.

Chunks
Regardless of the extent size for the storage pool, data is migrated in units of 16 MB. In this description, this unit is referred to as a chunk. The algorithm that is used to migrate an extent is as follows:

1. Pause all I/O on the source MDisk on all nodes in the SVC cluster (pause means to queue all new I/O requests in the virtualization layer in the SVC and to wait for all outstanding requests to complete). The I/O to other extents is unaffected.
2. Unpause (resume) I/O on all of the source MDisk extents, apart from writes to the specific chunk that is being migrated. Writes to the extent are mirrored to the source and destination.
3. On the node that is performing the migration, for each 256 KB section of the chunk:
   a. Synchronously read 256 KB from the source.
   b. Synchronously write 256 KB to the target.
4. After the entire chunk has been copied to the destination, repeat the process for the next chunk within the extent.
5. After the entire extent has been migrated, pause all I/O to the extent being migrated, perform a checkpoint on the extent move to on-disk metadata, redirect all further reads to the destination, and stop mirroring writes (writes go only to the destination).
6. If the checkpoint fails, the I/O is unpaused.

During the migration, the extent can be divided into three regions, as shown in Figure 6-2. Region B is the chunk that is being copied. Writes to Region B are queued (paused) in the virtualization layer, waiting for the chunk to be copied. Reads to Region A are directed to the destination, because this data has already been copied. Writes to Region A are written to both the source and the destination extent to maintain the integrity of the source extent. Reads and writes to Region C are directed to the source, because this region has yet to be migrated.

The migration of a chunk requires 64 synchronous reads and 64 synchronous writes. During this time, all writes to the chunk from higher layers in the software stack (such as cache destages) are held back. If the back-end storage is operating with significant latency, it is possible that this operation might take minutes to complete, which can have an adverse effect on the overall performance of the SVC. To avoid this situation, if the migration of a particular chunk is still active after one minute, the migration is paused for 30 seconds. During this time, writes to the chunk are allowed to proceed. After 30 seconds, the migration of the chunk is resumed. This algorithm is repeated as many times as necessary to complete the migration of the chunk.

Figure 6-2 Migrating an extent (not to scale): Region A (already copied) reads/writes go to the destination; Region B (the 16 MB chunk being copied) reads/writes are paused; Region C (yet to be copied) reads/writes go to the source.

The SVC guarantees read stability during data migrations, even if the data migration is stopped by a node reset or a cluster shutdown. This read stability is possible because the SVC disallows writes on all nodes to the area being copied, and upon a failure, the extent migration is restarted from the beginning. At the conclusion of the operation, we have these results:

- Extents are migrated in 16 MB chunks, one chunk at a time.
- Chunks are either copied, in progress, or not copied.
- When the extent is finished, its new location is saved.

Figure 6-3 shows the data migration and write operation relationship.

Figure 6-3 Migration and write operation relationship

6.4 Migrating data from an image mode volume


This section describes migrating data from an image mode volume to a fully managed volume. This is the type of migration that is used to take an existing host LUN and move it into the virtualization environment provided by the SVC.

6.4.1 Image mode volume migration concept


First, we describe the concepts associated with this operation.

MDisk modes
There are three MDisk modes:

- Unmanaged MDisk: An MDisk is reported as unmanaged when it is not a member of any storage pool. An unmanaged MDisk is not associated with any volumes and has no metadata stored on it. The SVC does not write to an MDisk that is in unmanaged mode, except when it attempts to change the mode of the MDisk to one of the other modes.
- Image mode MDisk: Image mode provides a direct block-for-block translation from the MDisk to the volume, with no virtualization. Image mode volumes have a minimum size of one block (512 bytes) and always occupy at least one extent. An image mode MDisk is associated with exactly one volume.
- Managed mode MDisk: Managed mode MDisks contribute extents to the pool of available extents in the storage pool. Zero or more managed mode volumes might use these extents.

Transitions between the modes


The following state transitions can occur to an MDisk (see Figure 6-4 on page 236):

- Unmanaged mode to managed mode: This transition occurs when an MDisk is added to a storage pool, which makes the MDisk eligible for the allocation of data and metadata extents.

- Managed mode to unmanaged mode: This transition occurs when an MDisk is removed from a storage pool.
- Unmanaged mode to image mode: This transition occurs when an image mode MDisk is created on an MDisk that was previously unmanaged. It also occurs when an MDisk is used as the target for a migration to image mode.
- Image mode to unmanaged mode: There are two distinct ways in which this transition can happen:
  - When an image mode volume is deleted, the MDisk that supported the volume becomes unmanaged.
  - When an image mode volume is migrated in image mode to another MDisk, the MDisk that is being migrated from remains in image mode until all data has been moved off of it. It then transitions to unmanaged mode.
- Image mode to managed mode: This transition occurs when the image mode volume that is using the MDisk is migrated into managed mode.
- Managed mode to image mode is impossible: There is no operation that takes an MDisk directly from managed mode to image mode. You can achieve this transition by performing operations that convert the MDisk to unmanaged mode and then to image mode.
Figure 6-4 Various states of a volume (transitions among the unmanaged, managed, image, and migrating-to-image modes)

Image mode volumes have the special property that the last extent in the volume can be a partial extent. Managed mode disks do not have this property.

To perform any type of migration activity on an image mode volume, the image mode disk must first be converted into a managed mode disk. If the image mode disk has a partial last extent, this last extent in the image mode volume must be the first extent to be migrated. This migration is handled as a special case. After this special migration operation has occurred, the volume becomes a managed mode volume and is treated in the same way as any other managed mode volume. If the image mode disk does not have a partial last extent, no special processing is performed. The image mode volume is simply changed into a managed mode volume and is treated in the same way as any other managed mode volume. After data is migrated off a partial extent, there is no way to migrate data back onto the partial extent.

6.4.2 Migration tips


Several methods are available to migrate an image mode volume to a managed mode volume. If your image mode volume is in the same storage pool as the MDisks to which you want to migrate the extents, you can perform one of these migrations:

- Migrate a single extent. You have to migrate the last extent of the image mode volume (number N-1).
- Migrate multiple extents.
- Migrate all of the in-use extents from an MDisk.
- Migrate extents off an MDisk that is being deleted.

If you have two storage pools, one storage pool for the image mode volume and one storage pool for the managed mode volumes, you can migrate a volume from one storage pool to another storage pool. Have one storage pool for all the image mode volumes, and other storage pools for the managed mode volumes, and use the migrate volume facility. Be sure to verify that enough extents are available in the target storage pool.

6.5 Data migration for Windows using the SVC GUI


In this section, we move two LUNs from a Windows Server 2008 server that is currently attached to an LSI 3500 storage subsystem over to the SVC. The migration examples include:

- Moving a Microsoft server's SAN LUNs from a storage subsystem and virtualizing those same LUNs through the SVC. Perform this activity when introducing the SVC into your environment. This section shows that your host downtime is only a few minutes while you remap and remask the disks using your storage subsystem LUN management tool. We describe this step in detail in 6.5.2, Adding the SVC between the host system and the LSI 3500 on page 241.
- Migrating your image mode volume to a volume while your host is still running and servicing your business application. Perform this activity if you are removing a storage subsystem from your SAN environment, or if you want to move the data onto LUNs that are more appropriate for the type of data stored on those LUNs, taking into account availability, performance, and redundancy. We describe this step in 6.5.6, Migrating the volume from image mode to image mode on page 268.

- Migrating your volume to an image mode volume. Perform this activity if you are removing the SVC from your SAN environment after a trial period. We describe this step in detail in 6.5.5, Migrating a volume from managed mode to image mode on page 263.
- Moving an image mode volume to another image mode volume. Use this procedure to migrate data from one storage subsystem to another storage subsystem. We describe this step in detail in 6.6.6, Migrating the volumes to image mode volumes on page 299.

You can use these activities individually or together to migrate your server's LUNs from one storage subsystem to another storage subsystem using the SVC as your migration tool. The only downtime that is required for these activities is the time that it takes you to remask and remap the LUNs between the storage subsystems and your SVC.

6.5.1 Windows Server 2008 host system connected directly to the LSI 3500
In our example configuration, we use a Windows Server 2008 host and an LSI 3500 storage subsystem. The host has two LUNs (drives X and Y), which are part of one LSI 3500 array. Before the migration, LUN masking is defined in the LSI 3500 to give the Windows Server 2008 host system access to the volumes labeled X and Y (see Figure 6-6 on page 239). Figure 6-5 shows the starting zoning scenario.

Figure 6-5 Starting zoning scenario

Figure 6-6 on page 239 shows the two LUNs (drive X and Y).

Figure 6-6 Drives X and Y

Figure 6-7 shows the properties of one of the LSI 3500 disks using the Subsystem Device Driver DSM (SDDDSM). The disk appears as an LSI INF-01-00 Multipath Disk Device.

Figure 6-7 Disk properties

6.5.2 Adding the SVC between the host system and the LSI 3500
Figure 6-8 shows the new environment with the SVC and a second storage subsystem attached to the SAN. The second storage subsystem is not required to migrate to the SVC, but in the following examples, we show that it is possible to move data across storage subsystems without any host downtime.

Figure 6-8 Add SVC and second storage subsystem

To add the SVC between the host system and the LSI 3500 storage subsystem, perform the following steps:

1. Check that you have installed supported device drivers on your host system.
2. Check that your SAN environment fulfills the supported zoning configurations.
3. Shut down the host.
4. Change the LUN masking in the LSI 3500. Mask the LUNs to the SVC, and remove the masking for the host. Figure 6-9 on page 242 shows the two LUNs with LUN IDs 10 and 11 remapped to SVC ITSOSVC1.

Figure 6-9 LUNs remapped

Attention: To avoid potential data loss, back up all the data stored on your external storage before using the wizard.

5. Log on to your SVC Console and open Pools → System Migration; see Figure 6-10.

Figure 6-10 Pools and System Migration

6. Click Start New Migration; this will start a wizard as shown in Figure 6-11 on page 243.

Figure 6-11 Start New Migration

7. Follow the Storage Migration Wizard as shown in Figure 6-12, then click Next.
Figure 6-12 Migration Wizard - Step 1 of 8

8. Figure 6-13 on page 245 shows the Prepare Environment for Migration information; click Next.

Figure 6-13 Migration Wizard - Step 2 of 8 - preparing the environment for migration

9. Click Next to complete Step 3; see Figure 6-14.

Figure 6-14 Migration Wizard - Step 3 of 8 - mapping storage

10.Figure 6-15 on page 246 shows device discovery; click Close.

Figure 6-15 Discovering devices

11.Figure 6-16 shows the available MDisks for Migration; click Next.

Figure 6-16 Migration Wizard - Step 4 of 8

12.Mark both MDisks for migrating as shown in Figure 6-17 on page 247, and then click Next.

Figure 6-17 Migration Wizard - selecting disks for migration

13.Figure 6-18 shows the MDisk import process. During the import, a new storage pool is automatically created, in our case Migrationpool_8192. You can see that the command the wizard issues creates an image mode volume with a one-to-one mapping to mdisk5. Click Close to continue.

Figure 6-18 Migration Wizard - Step 5 of 8 - MDisk import process

14.Now we create a new host object that we will later map the volume to. Click New Host as shown in Figure 6-19 on page 248.

Figure 6-19 Migration Wizard - creating a new host

15.Figure 6-20 shows the empty fields that we need to complete to match our host requirements.

Figure 6-20 Migration Wizard - host information fields

16.Here you type the name you want to use for the Host, add the Fibre Channel port, and then select a Host Type. In our case, the name is W2k8_Server. Click Create Host as shown in Figure 6-21 on page 250.

Figure 6-21 Migration Wizard - completed host information

17.Figure 6-22 shows the progress of creating a host. Click Close.

Figure 6-22 Progress status - creating a host

18.Figure 6-23 on page 251 shows that the host was created successfully. Click Next to continue.

Figure 6-23 Migration Wizard - host creation was successful

19.Figure 6-24 shows all the available volumes to map to a host. Click Next to continue.

Figure 6-24 Migration Wizard - Step 6 of 8 - volumes available for mapping

20.Mark both volumes and click Map to Host as shown in Figure 6-25 on page 252.

Figure 6-25 Migration Wizard - mapping volumes to host

21.Modify Mapping by choosing the host using the drop-down menu as shown in Figure 6-26, and then click Next.

Figure 6-26 Migration Wizard - Modify Host Mapping

22.The rightmost side of Figure 6-27 on page 253 shows the volumes that can be marked to map to your host. Mark both volumes and click Apply.

Figure 6-27 Migration Wizard - volume mapping to host

23.Figure 6-28 shows the progress of the volume mapping to host. Click Close when finished.

Figure 6-28 Modify Mappings - task completed

24.After the volume to host mapping task is completed, notice that beneath the column heading Host Mapping a host is shown marked Yes; see Figure 6-29 on page 254. Click Next.

Figure 6-29 Migration Wizard - Map Volumes to Hosts

25.Select the storage pool you want to use for migration, in our case STGPool_DS3500-2 as shown in Figure 6-30, and click Next.

Figure 6-30 Migration Wizard - Step 7 - selecting a storage pool to use for migration

26.Migration starts automatically by doing a volume copy, as shown in Figure 6-31 on page 255.

Figure 6-31 Start Migration - task completed

27.Figure 6-32 then appears, advising that migration has begun. Click Finish.

Figure 6-32 Migration Wizard - Step 8 of 8 - data migration has begun

28.The window in Figure 6-33 on page 256 will appear automatically to show the progress of the migration.

Figure 6-33 Progress of migration process

29.Go to Volumes → Volumes by Host, as shown in Figure 6-34, to see all the volumes served by the newly created host for this migration step.

Figure 6-34 Selecting to view volumes by host

30.Figure 6-35 on page 257 shows all the volumes (copy0* and copy1) served by the created host.

Figure 6-35 Volumes served by host

You can see in Figure 6-35 that the migrated volume is actually a mirrored volume with one copy on the image mode pool and another copy in a managed mode storage pool. The administrator can choose to leave the volume like this or split the initial copy from the mirror.

6.5.3 Importing the migrated disks into an online Windows Server 2008 host
To import the migrated disks into an online Windows Server 2008 host, perform these steps:

1. Start the Windows Server 2008 host system again, and go to the Device Manager to see that the disk properties have changed to a 2145 Multi-Path Disk Device (Figure 6-36 on page 258).

Figure 6-36 Device manager - see the new disk properties

2. Figure 6-37 shows the Disk Management window.

Figure 6-37 Migrated disks are available

3. Select Start → All Programs → Subsystem Device Driver DSM → Subsystem Device Driver DSM to open the SDDDSM command-line utility; see Figure 6-38.

Figure 6-38 Subsystem Device Driver DSM CLI

4. Enter the datapath query device command to check whether all paths are available, as planned in your SAN environment; see Example 6-1.
Example 6-1 The datapath query device command

Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.

C:\Program Files\IBM\SDDDSM>datapath query device

Total Devices : 2

DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801AF813F1000000000000029
============================================================================
Path#          Adapter/Hard Disk      State   Mode    Select  Errors
    0  Scsi Port7 Bus0/Disk1 Part0     OPEN  NORMAL      145       0
    1  Scsi Port7 Bus0/Disk1 Part0     OPEN  NORMAL       75       0
    2  Scsi Port8 Bus0/Disk1 Part0     OPEN  NORMAL       73       0
    3  Scsi Port8 Bus0/Disk1 Part0     OPEN  NORMAL        0       0
    4  Scsi Port8 Bus0/Disk1 Part0     OPEN  NORMAL        0       0
    5  Scsi Port7 Bus0/Disk1 Part0     OPEN  NORMAL        0       0
    6  Scsi Port7 Bus0/Disk1 Part0     OPEN  NORMAL        0       0
    7  Scsi Port8 Bus0/Disk1 Part0     OPEN  NORMAL       76       0

DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801AF813F100000000000002A
============================================================================
Path#          Adapter/Hard Disk      State   Mode    Select  Errors
    0  Scsi Port7 Bus0/Disk2 Part0     OPEN  NORMAL        0       0
    1  Scsi Port7 Bus0/Disk2 Part0     OPEN  NORMAL        0       0
    2  Scsi Port8 Bus0/Disk2 Part0     OPEN  NORMAL        0       0
    3  Scsi Port8 Bus0/Disk2 Part0     OPEN  NORMAL       94       0
    4  Scsi Port8 Bus0/Disk2 Part0     OPEN  NORMAL       77       0
    5  Scsi Port7 Bus0/Disk2 Part0     OPEN  NORMAL       76       0
    6  Scsi Port8 Bus0/Disk2 Part0     OPEN  NORMAL        0       0
    7  Scsi Port7 Bus0/Disk2 Part0     OPEN  NORMAL       68       0

C:\Program Files\IBM\SDDDSM>

6.5.4 Adding the SVC between the host and LSI3500 using the CLI
In this section, we use only CLI commands to add direct-attached storage to the SVC's managed storage. To read about our preparation of the environment, see 6.5.1, Windows Server 2008 host system connected directly to the LSI 3500 on page 238.

Verifying the currently used storage pools


Verify the currently used storage pools on the SVC, as shown in Example 6-2, to get an idea of the storage pools' free capacity.
Example 6-2 Storage pools free capacity
IBM_2145:ITSO_SVC1:admin>svcinfo lsmdiskgrp -delim " "
id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning easy_tier easy_tier_status
0 STGPool_DS3500-1 online 3 0 382.50GB 256 382.50GB 0.00MB 0.00MB 0.00MB 0 0 auto inactive
1 STGPool_DS3500-2 online 3 2 384.00GB 256 354.00GB 30.00GB 30.00GB 30.00GB 7 0 auto inactive
3 STGPool_Multi_Tier online 2 0 20.00GB 256 20.00GB 0.00MB 0.00MB 0.00MB 0 0 auto inactive
4 MigrationPool_8192 online 2 2 30.00GB 8192 0 30.00GB 30.00GB 30.00GB 100 0 auto inactive
IBM_2145:ITSO_SVC1:admin>

Creating a storage pool


When we move the two LUNs to the SVC, we use them initially in image mode; therefore, we need a storage pool to hold those disks. First, we add a new empty storage pool (in our case, imagepool) for the import of the LUNs, as shown in Example 6-3. It is better to use a separate pool so that, if something happens during the import, the import process cannot affect the other storage pools.
Example 6-3 Adding a storage pool

IBM_2145:ITSO_SVC1:admin>svctask mkmdiskgrp -name imagepool -tier generic_hdd -easytier off -ext 256
MDisk Group, id [2], successfully created
IBM_2145:ITSO_SVC1:admin>

Verifying the new storage pool has been created


Now we verify whether the new storage pool has been added correctly, as shown in Example 6-4.

Example 6-4 Verifying the new storage pool

IBM_2145:ITSO_SVC1:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning easy_tier easy_tier_status
0 STGPool_DS3500-1 online 3 0 382.50GB 256 382.50GB 0.00MB 0.00MB 0.00MB 0 0 auto inactive
1 STGPool_DS3500-2 online 3 2 384.00GB 256 354.00GB 30.00GB 30.00GB 30.00GB 7 0 auto inactive
2 imagepool online 0 0 0 256 0 0.00MB 0.00MB 0.00MB 0 0 off inactive
3 STGPool_Multi_Tier online 2 0 20.00GB 256 20.00GB 0.00MB 0.00MB 0.00MB 0 0 auto inactive
4 MigrationPool_8192 online 2 2 30.00GB 8192 0 30.00GB 30.00GB 30.00GB 100 0 auto inactive
IBM_2145:ITSO_SVC1:admin>

Creating the image volume


As shown in Example 6-5, we create two image mode volumes (image1 and image2) within our storage pool imagepool, one for each MDisk, to import the LUNs from the storage controller into the SVC.
Example 6-5 Creating the image volume

IBM_2145:ITSO_SVC1:admin>svctask mkvdisk -name image1 -iogrp 0 -mdiskgrp imagepool -vtype image -mdisk mdisk11 -syncrate 80
Virtual Disk, id [0], successfully created
IBM_2145:ITSO_SVC1:admin>svctask mkvdisk -name image2 -iogrp 0 -mdiskgrp imagepool -vtype image -mdisk mdisk12 -syncrate 80
Virtual Disk, id [1], successfully created
IBM_2145:ITSO_SVC1:admin>

Verifying the image volumes


Now we check again whether the volumes are created within the storage pool imagepool, as shown in Example 6-6.
Example 6-6 Verifying the image volumes

IBM_2145:ITSO_SVC1:admin>svcinfo lsvdisk
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state se_copy_count compressed_copy_count RC_change
0 image1 0 io_grp0 online 2 imagepool 20.00GB image 6005076801AF813F100000000000002B 0 1 empty 0 0 no
1 image2 0 io_grp0 online 2 imagepool 10.00GB image 6005076801AF813F100000000000002C 0 1 empty 0 0 no

Creating the host


We check whether our host exists or if we need to create it, as shown in Example 6-7. In our case the server has already been created.
Example 6-7 Listing the host

IBM_2145:ITSO_SVC1:admin>svcinfo lshost
id name port_count iogrp_count status
0 W2K8_HYPERV1 2 4 online
IBM_2145:ITSO_SVC1:admin>

Mapping the image volumes to the host


Next, we map the image volumes to the host W2K8_HYPERV1, as shown in Example 6-8; this is also known as LUN masking.
Example 6-8 Mapping the volumes

IBM_2145:ITSO_SVC1:admin>svctask mkvdiskhostmap -force -host W2K8_HYPERV1 -scsi 0 image1
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO_SVC1:admin>svctask mkvdiskhostmap -force -host W2K8_HYPERV1 -scsi 1 image2
Virtual Disk to Host map, id [1], successfully created

Adding the image volumes to a storage pool


Add a copy of each image volume in storage pool STGPool_DS3500-2, as shown in Example 6-9, so that the volumes become fully allocated volumes that are managed by the SVC.
Example 6-9 Adding volumes to storage pool

IBM_2145:ITSO_SVC1:admin>svctask addvdiskcopy -mdiskgrp STGPool_DS3500-2 image1
Vdisk [0] copy [1] successfully created
IBM_2145:ITSO_SVC1:admin>svctask addvdiskcopy -mdiskgrp STGPool_DS3500-2 image2
Vdisk [1] copy [1] successfully created

Checking the status of the volumes


Both volumes now have a second copy (shown as type many in Example 6-10) and are available to be used by the host.
Example 6-10 Status check

IBM_2145:ITSO_SVC1:admin>svcinfo lsvdisk
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state se_copy_count compressed_copy_count RC_change
0 image1 0 io_grp0 online many many 20.00GB many 6005076801AF813F100000000000002B 0 2 empty 0 0 no
1 image2 0 io_grp0 online many many 10.00GB many 6005076801AF813F100000000000002C 0 2 empty 0 0 no
IBM_2145:ITSO_SVC1:admin>

6.5.5 Migrating a volume from managed mode to image mode


In this section, we migrate a managed volume to an image mode volume by performing these steps:

1. We create an empty storage pool for each volume that we want to migrate to image mode. These storage pools host the target MDisks that we map to our server at the end of the migration.
2. We go to Pools → MDisks by Pools and create a new pool from the drop-down menu, as shown in Figure 6-39.

Figure 6-39 Selecting Pools

3. To create an empty storage pool for migration, perform Step 1 and Step 2 as shown in Figure 6-40 on page 264 and Figure 6-41 on page 264.

Figure 6-40 Create Storage Pool - Step 1 of 2

Next, click Finish; see Figure 6-41.

Figure 6-41 Create Storage Pool - Step 2

4. Figure 6-42 reminds you that an empty storage pool has been created. Click OK.

Figure 6-42 Reminder

5. Figure 6-43 on page 265 shows the progress status of creating a storage pool for migration. Click Close to continue.

Figure 6-43 Create storage pool - progress status

6. From the Volumes > All Volumes panel, select the volume that you want to migrate to image mode and select Export to Image Mode from the drop-down menu as shown in Figure 6-44.

Figure 6-44 Select volume

7. Select the MDisk to migrate the volume onto, as shown in Figure 6-45 on page 266, and then click Next.

Figure 6-45 Migrate to an Image Mode

8. Select the storage pool in which the image mode volume will be placed after the migration is completed (in our case, the For Migration pool), and click Finish; see Figure 6-46.

Figure 6-46 Select storage pool

9. The volume is exported to image mode and placed in the For Migration pool; see Figure 6-47. Click Close.

Figure 6-47 Export Volume to image process

10.Navigate to the Pools → MDisks by Pools section and click the + (expand) button. Notice that MDisk6 is now an image mode MDisk, as shown in Figure 6-48.

Figure 6-48 MDisk is in image mode

11.Repeat these steps for every volume that you want to migrate to an image mode volume.
12.Delete the image mode data from the SVC by using the procedure described in 6.5.7, Removing image mode data from the SVC on page 278.

6.5.6 Migrating the volume from image mode to image mode


Use the volume migration from image mode to image mode process to move image mode volumes from one storage subsystem to another storage subsystem without going through the SVC fully managed mode. The data stays available for the applications during this migration. This procedure is nearly the same as the procedure described in 6.5.5, Migrating a volume from managed mode to image mode on page 263.

In our example, we migrate the Windows server's W2k8_Log volume to another disk subsystem as an image mode volume. The second storage subsystem is an LSI 5100; a new LUN is configured on the storage and mapped to the SVC cluster. The LUN is available to the SVC as an unmanaged MDisk8, as shown in Figure 6-49.

Figure 6-49 Unmanaged disk on a storage subsystem

To migrate the image mode volume to another image mode volume, perform the following steps:

1. Mark the unmanaged MDisk8, click Actions (or right-click), and select Import from the list, as shown in Figure 6-50.

Figure 6-50 Import the unmanaged MDisk into SVC

2. The Introduction window opens describing the process of importing the MDisk and mapping an image mode volume to it, as shown in Figure 6-51. Click Next.

Figure 6-51 Import Wizard - Step 1 of 2

3. Do not select a target pool because you do not want to migrate into an SVC managed volume pool. Instead, simply click Finish; see Figure 6-52 on page 270.

Figure 6-52 Import Wizard - Step 2

4. Figure 6-53 shows a warning message indicating a storage pool has not been selected and the volume will remain in the temporary pool. Click OK to continue.

Figure 6-53 Warning message - storage pool not selected

5. The import process starts, as shown in Figure 6-54, by creating a temporary storage pool Migrationpool_8192 (8 GB) and an image volume. Click Close to continue.

Figure 6-54 Import of MDisk and creation of temporary storage pool Migrationpool_8192

6. As shown in Figure 6-55, there is now an image mode mdisk8 with the import controller name and SCSI ID as its name.

Figure 6-55 Imported mdisk8 within the created storage pool

7. Now create a new storage pool, Migration_out, with the same extent size (8 GB) as the automatically created storage pool Migrationpool_8192, for transferring the image mode disk. Go to Pools → MDisks by Pools, as shown in Figure 6-56.

Figure 6-56 Pools

8. Click New Pool to create an empty storage pool, as shown in Figure 6-57.

Figure 6-57 Create a new storage pool

9. Give your new storage pool the meaningful name Migration_out and click the Advanced Settings drop-down menu. Choose 8 GB as the extent size for your new storage pool, as shown in Figure 6-58.

Figure 6-58 Step 1 of 2 - create an empty storage pool with extent size 8 GB

10.Figure 6-59 shows a storage pool window without any disks. Click Finish to continue to create an empty storage pool.

Figure 6-59 Step 2 - no disks

11.The warning in Figure 6-60 on page 274 pops up to remind you that an empty storage pool will be created. Click OK to continue.

Figure 6-60 Warning message - creating an empty storage pool

12.Figure 6-61 shows the progress of creating the storage pool Migration_out. Click Close to continue.

Figure 6-61 Progress of storage pool creation

13.The empty storage pool for image-to-image migration has been created. Go to Volumes → Volumes by Pool, as shown in Figure 6-62.

Figure 6-62 Storage pool created

14.Select the storage pool of the imported disk, Migrationpool_8192 in the left panel. Then mark the image disk you want to migrate out and select Actions. From the drop-down menu select Export to Image Mode, as shown in Figure 6-63.

Figure 6-63 Export to Image Mode

15.Select the target MDisk on the new disk controller that you want to migrate to. Click Next, as shown in Figure 6-64.

Figure 6-64 Step 1 of 2 - select target MDisk

16.Select the empty target storage pool, Migration_out, as shown in Figure 6-65. Click Finish.

Figure 6-65 Step 2 - select target storage pool

17.Figure 6-66 shows the progress status of the Export Volume to Image process. Click Close to continue.

Figure 6-66 Export Volume to Image progress status

18.Figure 6-67 on page 277 shows that the MDisk location has changed as expected to the new storage pool Migration_out.

Figure 6-67 Image disk migrated to new storage pool

19.Repeat these steps for all image mode volumes that you want to migrate.
20.If you want to delete the data from the SVC, use the procedure described in 6.5.7, Removing image mode data from the SVC on page 278.

6.5.7 Removing image mode data from the SVC


If your data resides in an image mode volume inside the SVC, you can remove the volume from the SVC, which frees up the original LUN for reuse. The preceding sections illustrated how to migrate data to an image mode volume. Depending on your environment, you might have to follow these procedures before deleting the image volume:

- 6.5.5, Migrating a volume from managed mode to image mode on page 263
- 6.5.6, Migrating the volume from image mode to image mode on page 268

To remove the image mode volume from the SVC, we use the svctask rmvdisk command.
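As a sketch (the volume name is a placeholder), the CLI form is:

svctask rmvdisk IMAGE_VOL1

Without the -force flag, the command fails if the volume is still mapped to a host or still has outstanding write data in the cache, which helps protect the data on the underlying MDisk.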


If the command succeeds on an image mode volume, the underlying back-end storage controller is consistent with the data that a host might previously have read from the image mode volume; that is, all fast write data has been flushed to the underlying LUN. Deleting an image mode volume causes the MDisk that is associated with the volume to be ejected from the storage pool. The mode of the MDisk is returned to unmanaged.

Note: This behavior applies only to image mode volumes. If you delete a normal volume, all of its data is deleted as well.

As shown in Example 6-1 on page 259, the SAN disks currently reside on the SVC 2145 device. Check that you have installed the supported device drivers on your host system. To switch back to the storage subsystem, perform the following steps:
1. Shut down your host system.
2. Open the Volumes by Host window to see which volumes are currently mapped to your host, as shown in Figure 6-68.

Figure 6-68 Volume by host mapping

3. Select your host, and then select your volume. Right-click the volume and select Unmap all Hosts from the menu, as shown in Figure 6-69 on page 280.


Figure 6-69 Unmap volume from host

4. Verify your unmap process, as shown in Figure 6-70, and click Unmap.

Figure 6-70 Verify your unmapping process

5. Figure 6-71 shows that the volume has been removed from the host.


Figure 6-71 Volume has been removed from host

6. Repeat steps 3 to 5 for every image mode volume that you want to remove from the SVC.
7. Edit the LUN masking on your storage subsystem. Remove the SVC from the LUN masking, and add the host to the masking.
8. Power on your host system.
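If you script these actions instead of using the GUI, the unmapping in steps 3 to 5 corresponds to the rmvdiskhostmap command that is shown later in this chapter (a sketch; the host and volume names are placeholders):

svctask rmvdiskhostmap -host HOST_NAME VOLUME_NAME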

6.5.8 Map the free disks onto the Windows Server 2008
To detect and map the disks that have been freed from SVC management, go to the Windows Server 2008 host:
1. Using your LSI 3500 Storage Manager interface, remap the two LUNs that were MDisks back to your Windows Server 2008 server.
2. Open your Device Manager window. Figure 6-72 on page 282 shows that the LUNs are now back to an LSI INF 01-00 type.


Figure 6-72 LSI INF 01-00 type

3. Open your Disk Management window and notice that the disks have appeared. You might need to reactivate your disk by using the right-click option on each disk.


Figure 6-73 Windows Server 2008 Disk Management
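As an alternative to the Disk Management GUI, the disks can also be brought back online with the diskpart utility (a hedged sketch for Windows Server 2008; the disk number is an example and varies by system):

C:\> diskpart
DISKPART> list disk
DISKPART> select disk 1
DISKPART> online disk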

6.6 Migrating Linux SAN disks to SVC disks


In this section, we move the two LUNs from a Linux server that is currently booting directly off of our DS4000 storage subsystem over to the SVC. We then manage those LUNs with the SVC, move them between other managed disks, and finally move them back to image mode disks so that those LUNs can be masked and mapped back to the Linux server directly. This example can help you perform any of the following activities in your environment:
- Move a Linux server's SAN LUNs from a storage subsystem and virtualize those same LUNs through the SVC. Perform this activity first when introducing the SVC into your environment. This section shows that your host downtime is only a few minutes while you remap and remask disks using your storage subsystem LUN management tool. We describe this step in detail in 6.6.2, "Preparing your SVC to virtualize disks" on page 286.
- Move data between storage subsystems while your Linux server is still running and servicing your business application. Perform this activity if you are removing a storage subsystem from your SAN environment, or if you want to move the data onto LUNs that are more appropriate for the type of data that is stored on those LUNs, taking availability, performance, and redundancy into account. We describe this step in 6.6.4, "Migrating the image mode volumes to managed MDisks" on page 293.
- Move your Linux server's LUNs back to image mode volumes so that they can be remapped and remasked directly back to the Linux server. We describe this step in 6.6.5, "Preparing to migrate from the SVC" on page 296.

You can use these three activities individually, or together, to migrate your Linux server's LUNs from one storage subsystem to another storage subsystem using the SVC as your migration tool. If you use only some of these activities, you can use them to introduce the SVC into your environment, or to remove the SVC from it. The only downtime required for these activities is the time that it takes to remask and remap the LUNs between the storage subsystems and your SVC. Figure 6-74 shows our Linux environment.

Figure 6-74 Linux SAN environment

Figure 6-74 shows our Linux server connected to our SAN infrastructure. It has two LUNs that are masked directly to it from our storage subsystem:
- The LUN with SCSI ID 0 holds the host operating system (our host runs Red Hat Enterprise Linux V5.1), and this LUN is used to boot the system directly from the storage subsystem. The operating system identifies it as /dev/mapper/VolGroup00-LogVol00, and Linux sees this LUN as our /dev/sda disk.
- We have also mapped a second disk (SCSI LUN ID 1) to the host. It is 5 GB in size, and it is mounted in the /data folder on the /dev/dm-2 disk.

SCSI LUN ID 0: To successfully boot a host off of the SAN, you must have assigned the LUN as SCSI LUN ID 0.

Example 6-11 on page 285 shows our disks that are directly attached to the Linux host.


Example 6-11 Directly attached disks

[root@Palau data]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      10093752   1971344   7601400  21% /
/dev/sda1               101086     12054     83813  13% /boot
tmpfs                  1033496         0   1033496   0% /dev/shm
/dev/dm-2              5160576    158160   4740272   4% /data
[root@Palau data]#

Our Linux server represents a typical SAN environment with a host directly using LUNs that were created on a SAN storage subsystem, as shown in Figure 6-74 on page 284:
- The Linux server's host bus adapter (HBA) cards are zoned so that they are in the Green Zone with our storage subsystem.
- The two LUNs that have been defined on the storage subsystem, using LUN masking, are directly available to our Linux server.

6.6.1 Connecting the SVC to your SAN fabric


This section describes the basic steps that you take to introduce the SVC into your SAN environment. Although this section only summarizes these activities, you can introduce the SVC into your SAN environment without any downtime to any host or application that also uses your storage area network. If you have an SVC that is already connected, skip to 6.6.2, "Preparing your SVC to virtualize disks" on page 286.

Connecting the SVC to your SAN fabric requires that you perform these tasks:
1. Assemble your SVC components (nodes, uninterruptible power supply unit, and SSPC), cable the SVC correctly, power the SVC on, and verify that the SVC is visible on your SAN. We describe these tasks in much greater detail in Chapter 3, "Planning and configuration" on page 67.
2. Create and configure your SVC cluster.
3. Create these additional zones:
   - An SVC node zone (our Black Zone in Figure 6-75 on page 286)
   - A storage zone (our Red Zone)
   - A host zone (our Blue Zone)

For more detailed information about how to configure the zones correctly, see Chapter 3, "Planning and configuration" on page 67. Figure 6-75 on page 286 shows our environment.


Figure 6-75 SAN environment with SVC attached

6.6.2 Preparing your SVC to virtualize disks


This section describes the preparation tasks that we performed before taking our Linux server offline. These activities are all nondisruptive. They do not affect your SAN fabric or your existing SVC configuration (if you already have a production SVC in place).

Creating a storage pool


When we move the two Linux LUNs to the SVC, we use them initially in image mode. Therefore, we need a storage pool to hold those disks. First, we create an empty storage pool for each of the disks, using the commands in Example 6-12. We name the storage pool Palau_Pool1 to hold our boot LUN, and Palau_Pool2 to hold the data LUN.
Example 6-12 Create an empty storage pool

IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name Palau_Pool1 -ext 512
MDisk Group, id [2], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name Palau_Pool2 -ext 512
MDisk Group, id [3], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp
id name        status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning easy_tier easy_tier_status
2  Palau_Pool1 online 0           0           0        512         0             0.00MB           0.00MB        0.00MB        0              0       auto      inactive
3  Palau_Pool2 online 0           0           0        512         0             0.00MB           0.00MB        0.00MB        0              0       auto      inactive
IBM_2145:ITSO-CLS1:admin>

Creating your host definition


If you have prepared your zones correctly, the SVC can see the Linux server's HBA adapters on the fabric (our host only had one HBA). The svcinfo lshbaportcandidate command on the SVC lists all of the WWNs that the SVC can see on the SAN fabric but that have not yet been allocated to a host. Example 6-13 shows the output of the nodes that it found on our SAN fabric. (If the port does not show up, it indicates a zone configuration problem.)
Example 6-13 Display HBA port candidates

IBM_2145:ITSO-CLS1:admin>svcinfo lshbaportcandidate
id
210000E08B89C1CD
210000E08B054CAA
210000E08B0548BC
210000E08B0541BC
210000E08B89CCC2
IBM_2145:ITSO-CLS1:admin>

If you do not know the WWN of your Linux server, you can look at which WWNs are currently configured on your storage subsystem for this host. Figure 6-76 shows our configured ports on an IBM DS4700 storage subsystem.

Figure 6-76 Display port WWNs


After verifying that the SVC can see our host, we create the host entry and assign the WWNs to this entry. Example 6-14 shows these commands.
Example 6-14 Create the host entry

IBM_2145:ITSO-CLS1:admin>svctask mkhost -name Palau -hbawwpn 210000E08B054CAA:210000E08B89C1CD
Host, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lshost Palau
id 0
name Palau
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B89C1CD
node_logged_in_count 4
state inactive
WWPN 210000E08B054CAA
node_logged_in_count 4
state inactive
IBM_2145:ITSO-CLS1:admin>

Verifying that we can see our storage subsystem


If we set up our zoning correctly, the SVC can see the storage subsystem with the svcinfo lscontroller command (Example 6-15).
Example 6-15 Discover storage controller

IBM_2145:ITSO-CLS1:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0  DS4500                   IBM       1742-900
1  DS4700                   IBM       1814           FAStT
IBM_2145:ITSO-CLS1:admin>

You can rename the storage subsystem to a more meaningful name with the svctask chcontroller -name command. If multiple storage subsystems are connected to your SAN fabric, renaming them makes it considerably easier to identify them.

Getting the disk serial numbers


To help avoid the risk of creating the wrong volumes from all of the available, unmanaged MDisks (in case the SVC sees many available, unmanaged MDisks), we get the LUN serial numbers from our storage subsystem administration tool (Storage Manager). When we discover these MDisks, we confirm that we have the correct serial numbers before we create the image mode volumes. If you also use a DS4000 family storage subsystem, Storage Manager provides the LUN serial numbers. Right-click your logical drive and choose Properties. Our serial numbers are shown in the following figures. Figure 6-77 on page 289 shows the disk serial number SAN_Boot_palau.


Figure 6-77 Obtaining the disk serial number - SAN_Boot_palau

Figure 6-78 shows the disk serial number Palau_data.

Figure 6-78 Obtaining the disk serial number - Palau_data


Before we move the LUNs to the SVC, we must configure the host multipath configuration for the SVC. Edit the multipath.conf file and restart the multipathd service, as shown in Example 6-16, and add the content of Example 6-17 to the file.
Example 6-16 Edit the multipath.conf file

[root@Palau ~]# vi /etc/multipath.conf
[root@Palau ~]# service multipathd stop
Stopping multipathd daemon:                                [  OK  ]
[root@Palau ~]# service multipathd start
Starting multipathd daemon:                                [  OK  ]
[root@Palau ~]#

Example 6-17 Data to add to the multipath.conf file

# SVC
device {
        vendor "IBM"
        product "2145CF8"
        path_grouping_policy group_by_serial
}

We are now ready to move the ownership of the disks to the SVC, discover them as MDisks, and give them back to the host as volumes.

6.6.3 Moving the LUNs to the SVC


In this step, we move the LUNs that are assigned to the Linux server and reassign them to the SVC. Our Linux server has two LUNs: one LUN is for our boot disk and operating system file systems, and the other LUN holds our application and data files.

Moving both LUNs at one time requires shutting down the host. If we only wanted to move the LUN that holds our application and data files, we would not have to reboot the host. The only requirement is that we unmount the file system and vary off the volume group (VG) to ensure data integrity during the reassignment.

Because we intend to move both LUNs at the same time, the following steps are required:
1. Confirm that the multipath.conf file is configured for the SVC.
2. Shut down the host.
   If you are only moving the LUNs that contain the application and data, follow this procedure instead:
   a. Stop the applications that are using the LUNs.
   b. Unmount those file systems with the umount MOUNT_POINT command.
   c. If the file systems are a logical volume manager (LVM) volume, deactivate that VG with the vgchange -a n VOLUMEGROUP_NAME command.
   d. If possible, also unload your HBA driver using the rmmod DRIVER_MODULE command. This command removes the SCSI definitions from the kernel (we will reload this module and rediscover the disks later). It is also possible to tell the Linux SCSI subsystem to rescan for new disks without unloading the HBA driver; a minimal sketch follows this list.
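The rescan mentioned in step 2d can be triggered through sysfs (a minimal sketch, assuming a Red Hat Enterprise Linux 5 kernel; the SCSI host adapter numbers vary by system). The three dashes are wildcards for channel, target, and LUN:

echo "- - -" > /sys/class/scsi_host/host0/scan
echo "- - -" > /sys/class/scsi_host/host1/scan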


3. Using Storage Manager (our storage subsystem management tool), we unmap and unmask the disks from the Linux server, and we remap and remask the disks to the SVC.

   LUN IDs: Even though we are using boot from SAN, you can map the boot disk to the SVC with any LUN number. The LUN number only has to be SCSI ID 0 later, when we configure the mapping in the SVC to the host.

4. From the SVC, discover the new disks with the svctask detectmdisk command. The disks are discovered and named mdiskN, where N is the next available MDisk number (starting from 0). Example 6-18 shows the commands that we used to discover our MDisks and to verify that we have the correct MDisks.
Example 6-18 Discover the new MDisks

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name    status mode      mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name UID                                                              tier
26 mdisk26 online unmanaged                             12.0GB   0000000000000008 DS4700          600a0b800026b2820000428f48739bca00000000000000000000000000000000 generic_hdd
27 mdisk27 online unmanaged                             5.0GB    0000000000000009 DS4700          600a0b800026b282000042f84873c7e100000000000000000000000000000000 generic_hdd
IBM_2145:ITSO-CLS1:admin>

Important: Match your discovered MDisk serial numbers (the UID column in the svcinfo lsmdisk output) with the serial numbers that you recorded earlier (in Figure 6-77 and Figure 6-78 on page 289).

5. After we have verified that we have the correct MDisks, we rename them to avoid confusion in the future when we perform other MDisk-related tasks (Example 6-19).
Example 6-19 Rename the MDisks

IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name md_palauS mdisk26
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name md_palauD mdisk27
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name      status mode      mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name UID                                                              tier
26 md_palauS online unmanaged                             12.0GB   0000000000000008 DS4700          600a0b800026b2820000428f48739bca00000000000000000000000000000000 generic_hdd
27 md_palauD online unmanaged                             5.0GB    0000000000000009 DS4700          600a0b800026b282000042f84873c7e100000000000000000000000000000000 generic_hdd
IBM_2145:ITSO-CLS1:admin>


6. We create our image mode volumes with the svctask mkvdisk command and the -vtype image option (Example 6-20). This command virtualizes the disks in the exact same layout as though they were not virtualized.
Example 6-20 Create the image mode volumes

IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp Palau_Pool1 -iogrp 0 -vtype image -mdisk md_palauS -name palau_SANB
Virtual Disk, id [29], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp Palau_Pool2 -iogrp 0 -vtype image -mdisk md_palauD -name palau_Data
Virtual Disk, id [30], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name      status mode  mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name UID                                                              tier
26 md_palauS online image 2            Palau_Pool1    12.0GB   0000000000000008 DS4700          600a0b800026b2820000428f48739bca00000000000000000000000000000000 generic_hdd
27 md_palauD online image 3            Palau_Pool2    5.0GB    0000000000000009 DS4700          600a0b800026b282000042f84873c7e100000000000000000000000000000000 generic_hdd
IBM_2145:ITSO-CLS1:admin>
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk
id name       IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type  FC_id FC_name RC_id RC_name vdisk_UID                        fc_map_count copy_count fast_write_state se_copy_count
29 palau_SANB 0           io_grp0       online 4            Palau_Pool1    12.0GB   image                             60050768018301BF280000000000002B 0            1          empty            0
30 palau_Data 0           io_grp0       online 4            Palau_Pool2    5.0GB    image                             60050768018301BF280000000000002C 0            1          empty            0

7. Map the new image mode volumes to the host (Example 6-21).

Important: Make sure that you map the boot volume with SCSI ID 0 to your host. The host must be able to identify the boot volume during the boot process.
Example 6-21 Map the volumes to the host

IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Palau -scsi 0 palau_SANB
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Palau -scsi 1 palau_Data
Virtual Disk to Host map, id [1], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap Palau
id name  SCSI_id vdisk_id vdisk_name wwpn             vdisk_UID
0  Palau 0       29       palau_SANB 210000E08B89C1CD 60050768018301BF280000000000002B
0  Palau 1       30       palau_Data 210000E08B89C1CD 60050768018301BF280000000000002C
IBM_2145:ITSO-CLS1:admin>

FlashCopy: While the application is in a quiescent state, you can choose to use FlashCopy to copy the new image volumes onto other volumes. You do not need to wait until the FlashCopy process has completed before starting your application.

8. Power on your host server, and enter your Fibre Channel (FC) HBA adapter BIOS before booting the operating system. Make sure that you change the boot configuration so that it points to the SVC. In our example, we performed the following steps on a QLogic HBA:
   a. Press Ctrl+Q to enter the HBA BIOS.
   b. Open Configuration Settings.
   c. Open Selectable Boot Settings.
   d. Change the entry from your storage subsystem to the SVC 2145 LUN with SCSI ID 0.
   e. Exit the menu and save your changes.
9. Boot up your Linux operating system. If you only moved the application LUN to the SVC and left your Linux server running, you only need to follow these steps to see the new volume:
   a. Load your HBA driver with the modprobe DRIVER_NAME command. If you did not (and cannot) unload your HBA driver, you can issue commands to the kernel to rescan the SCSI bus to see the new volumes (see the sketch in 6.6.3).
   b. Check your syslog, and verify that the kernel found the new volumes. On Red Hat Enterprise Linux, the syslog is stored in the /var/log/messages file.
   c. If your application and data are on an LVM volume, rediscover the VG, and then run the vgchange -a y VOLUME_GROUP command to activate the VG.
10.Mount your file systems with the mount /MOUNT_POINT command (Example 6-22). The df output shows that all of the disks are available again.
Example 6-22 Mount data disk

[root@Palau data]# mount /dev/dm-2 /data
[root@Palau data]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      10093752   1938056   7634688  21% /
/dev/sda1               101086     12054     83813  13% /boot
tmpfs                  1033496         0   1033496   0% /dev/shm
/dev/dm-2              5160576    158160   4740272   4% /data
[root@Palau data]#

11.You are now ready to start your application.

6.6.4 Migrating the image mode volumes to managed MDisks


While the Linux server is still running, and while our file systems are in use, we migrate the image mode volumes onto striped volumes, with the extents being spread over three other MDisks. In our example, the three new LUNs are located on a DS4500 storage subsystem, so we also move to another storage subsystem in this example.


Preparing MDisks for striped mode volumes


From our second storage subsystem, we have performed these tasks:
- Created and allocated three new LUNs to the SVC
- Discovered them as MDisks
- Renamed these LUNs to more meaningful names
- Created a new storage pool
- Placed all of these MDisks into this storage pool

You can see the output of our commands in Example 6-23.
Example 6-23 Create a new storage pool

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MD_palauVD -ext 512
MDisk Group, id [8], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID tier
26 md_palauS online image 2 Palau_Pool1 12.0GB 0000000000000008 DS4700 600a0b800026b2820000428f48739bca00000000000000000000000000000000 generic_hdd
27 md_palauD online image 3 Palau_Pool2 5.0GB 0000000000000009 DS4700 600a0b800026b282000042f84873c7e100000000000000000000000000000000 generic_hdd
28 mdisk28 online unmanaged 8.0GB 0000000000000010 DS4500 600a0b8000174233000000b9487778ab00000000000000000000000000000000 generic_hdd
29 mdisk29 online unmanaged 8.0GB 0000000000000011 DS4500 600a0b80001744310000010f48776bae00000000000000000000000000000000 generic_hdd
30 mdisk30 online unmanaged 8.0GB 0000000000000012 DS4500 600a0b8000174233000000bb487778d900000000000000000000000000000000 generic_hdd
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name palau-md1 mdisk28
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name palau-md2 mdisk29
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name palau-md3 mdisk30
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk palau-md1 MD_palauVD
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk palau-md2 MD_palauVD
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk palau-md3 MD_palauVD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID tier
26 md_palauS online image 2 Palau_Pool1 12.0GB 0000000000000008 DS4700 600a0b800026b2820000428f48739bca00000000000000000000000000000000 generic_hdd
27 md_palauD online image 3 Palau_Pool2 5.0GB 0000000000000009 DS4700 600a0b800026b282000042f84873c7e100000000000000000000000000000000 generic_hdd
28 palau-md1 online unmanaged 8 MD_palauVD 8.0GB 0000000000000010 DS4500 600a0b8000174233000000b9487778ab00000000000000000000000000000000 generic_hdd
29 palau-md2 online unmanaged 8 MD_palauVD 8.0GB 0000000000000011 DS4500 600a0b80001744310000010f48776bae00000000000000000000000000000000 generic_hdd
30 palau-md3 online unmanaged 8 MD_palauVD 8.0GB 0000000000000012 DS4500 600a0b8000174233000000bb487778d900000000000000000000000000000000 generic_hdd
IBM_2145:ITSO-CLS1:admin>

Migrating the volumes


We are now ready to migrate the image mode volumes onto striped volumes in the MD_palauVD storage pool with the svctask migratevdisk command (Example 6-24). Our Linux server remains running while the migration proceeds. To check the overall progress of the migration, we use the svcinfo lsmigrate command, as shown in Example 6-24. Listing the storage pools with the svcinfo lsmdiskgrp command shows that the free capacity on the old storage pools is slowly increasing as extents are moved to the new storage pool.
Example 6-24 Migrating image mode volumes to striped volumes

IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -vdisk palau_SANB -mdiskgrp MD_palauVD
IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -vdisk palau_Data -mdiskgrp MD_palauVD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 25
migrate_source_vdisk_index 29
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 70
migrate_source_vdisk_index 30
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>
Example 6-25 Migration complete

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp MD_palauVD
id 8
name MD_palauVD
status online
mdisk_count 3
vdisk_count 2
capacity 24.0GB
extent_size 512
free_capacity 7.0GB
virtual_capacity 17.00GB
used_capacity 17.00GB
real_capacity 17.00GB
overallocation 70
warning 0
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskmember palau_SANB
id
28
29
30
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskmember palau_Data
id
28
29
30
IBM_2145:ITSO-CLS1:admin>

Our migration to striped volumes on another storage subsystem (the DS4500) is now complete. The original image mode MDisks (md_palauS and md_palauD) can now be removed from the SVC, and their LUNs can be removed from the storage subsystem. If these LUNs are the last LUNs that were used on our DS4700 storage subsystem, we can remove it from our SAN fabric.
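A hedged sketch of that cleanup, assuming the source MDisks still sit in their original storage pools (adapt the names to your configuration):

IBM_2145:ITSO-CLS1:admin>svctask rmmdisk -mdisk md_palauS Palau_Pool1
IBM_2145:ITSO-CLS1:admin>svctask rmmdisk -mdisk md_palauD Palau_Pool2

After you unmap the corresponding LUNs at the storage subsystem, run the svctask detectmdisk command so that the SVC discards the now-offline MDisks.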

6.6.5 Preparing to migrate from the SVC


Before we move the Linux server's LUNs from being accessed by the SVC as volumes to being directly accessed from the storage subsystem, we must convert the volumes into image mode volumes. You might want to perform this activity for any one of these reasons:
- You purchased a new storage subsystem, and you were using the SVC as a tool to migrate from your old storage subsystem to this new storage subsystem.
- You used the SVC to FlashCopy or Metro Mirror a volume onto another volume, and you no longer need that host connected to the SVC.
- You want to move a host and its data that is currently connected to the SVC to a site where there is no SVC.
- Changes to your environment no longer require this host to use the SVC.

There are also other preparation activities that we can perform before we have to shut down the host and reconfigure the LUN masking and mapping. This section covers those activities. If you are moving the data to a new storage subsystem, it is assumed that the storage subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches. Your environment must look similar to our environment, which is shown in Figure 6-79 on page 297.


Figure 6-79 Environment with SVC

Making fabric zone changes


The first step is to set up the SAN configuration so that all of the zones are created. You must add the new storage subsystem to the Red Zone so that the SVC can talk to it directly. We also need a Green Zone for our host to use when we are ready for it to directly access the disk after it has been removed from the SVC. It is assumed that you have created the necessary zones, and after your zone configuration is set up correctly, the SVC sees the new storage subsystem controller using the svcinfo lscontroller command as in Example 6-26.
Example 6-26 Check controller name

IBM_2145:ITSO-CLS1:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0  controller0              IBM       1814           FAStT
IBM_2145:ITSO-CLS1:admin>

It is also a good idea to rename the new storage subsystem's controller to a more useful name, which can be done with the svctask chcontroller -name command, as in Example 6-27 on page 298.


Example 6-27 Rename controller

IBM_2145:ITSO-CLS1:admin>svctask chcontroller -name ITSO-4700 0
IBM_2145:ITSO-CLS1:admin>

Verify that the controller name was changed as you intended, as shown in Example 6-28.
Example 6-28 Recheck controller name

IBM_2145:ITSO-CLS1:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0  ITSO-4700                IBM       1814           FAStT
IBM_2145:ITSO-CLS1:admin>

Creating new LUNs


On our storage subsystem, we created two LUNs and masked the LUNs so that the SVC can see them. Eventually, we will give these two LUNs directly to the host, removing the volumes that the host currently has. To check that the SVC can use these two LUNs, issue the svctask detectmdisk command, as shown in Example 6-29.
Example 6-29 Discover the new MDisks

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
0 mdisk0 online managed 600a0b800026b282000042f84873c7e100000000000000000000000000000000
28 palau-md1 online managed 8 MD_palauVD 8.0GB 0000000000000010 DS4500 600a0b8000174233000000b9487778ab00000000000000000000000000000000
29 palau-md2 online managed 8 MD_palauVD 8.0GB 0000000000000011 DS4500 600a0b80001744310000010f48776bae00000000000000000000000000000000
30 palau-md3 online managed 8 MD_palauVD 8.0GB 0000000000000012 DS4500 600a0b8000174233000000bb487778d900000000000000000000000000000000
31 mdisk31 online unmanaged 6.0GB 0000000000000013 DS4500 600a0b8000174233000000bd4877890f00000000000000000000000000000000
32 mdisk32 online unmanaged 12.5GB 0000000000000014 DS4500 600a0b80001744310000011048777bda00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>

Even though the MDisks will not stay in the SVC for long, we suggest that you rename them to more meaningful names so that they do not get confused with other MDisks that are used by other activities. We also create the storage pool that will hold our new MDisks, which is shown in Example 6-30 on page 299.


Example 6-30 Rename the MDisks

IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name mdpalau_ivd mdisk32
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_Palauivd -ext 512
MDisk Group, id [9], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_Palauivd -ext 512
CMMVC5758E Object name already exists.
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning easy_tier easy_tier_status
8 MD_palauVD online 3 2 24.0GB 512 7.0GB 17.00GB 17.00GB 17.00GB 70 0 auto inactive
9 MDG_Palauivd online 0 0 0 512 0 0.00MB 0.00MB 0.00MB 0 0 auto inactive
IBM_2145:ITSO-CLS1:admin>

Our SVC environment is now ready for the volume migration to image mode volumes.

6.6.6 Migrating the volumes to image mode volumes


While our Linux server is still running, we migrate the managed volumes onto the new MDisks using image mode volumes. The command to perform this action is the svctask migratetoimage command, which is shown in Example 6-31.
Example 6-31 Migrate the volumes to image mode volumes

IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk palau_SANB -mdisk mdpalau_ivd -mdiskgrp MD_palauVD
IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk palau_Data -mdisk mdpalau_ivd1 -mdiskgrp MD_palauVD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
28 palau-md1 online managed 8 MD_palauVD 8.0GB 0000000000000010 DS4500 600a0b8000174233000000b9487778ab00000000000000000000000000000000
29 palau-md2 online managed 8 MD_palauVD 8.0GB 0000000000000011 DS4500 600a0b80001744310000010f48776bae00000000000000000000000000000000
30 palau-md3 online managed 8 MD_palauVD 8.0GB 0000000000000012 DS4500 600a0b8000174233000000bb487778d900000000000000000000000000000000
31 mdpalau_ivd1 online image 8 MD_palauVD 6.0GB 0000000000000013 DS4500 600a0b8000174233000000bd4877890f00000000000000000000000000000000
32 mdpalau_ivd online image 8 MD_palauVD 12.5GB 0000000000000014 DS4500 600a0b80001744310000011048777bda00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type Migrate_to_Image
progress 4


migrate_source_vdisk_index 29
migrate_target_mdisk_index 32
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type Migrate_to_Image
progress 30
migrate_source_vdisk_index 30
migrate_target_mdisk_index 31
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>

During the migration, our Linux server is unaware that its data is being physically moved between storage subsystems. After the migration completes, the image mode volumes are ready to be removed from the SVC, and the real LUNs can be mapped and masked directly to the host by using the storage subsystem's tool.

6.6.7 Removing the LUNs from the SVC


The next step requires downtime on the Linux server while we remap and remask the disks so that the host sees them directly through the Green Zone, as shown in Figure 6-79 on page 297.

Our Linux server has two LUNs: one LUN is our boot disk and operating system file systems, and the other LUN holds our application and data files. Moving both LUNs at one time requires shutting down the host. If we only want to move the LUN that holds our application and data files, we can move that LUN without rebooting the host. The only requirement is that we unmount the file system and vary off the VG to ensure data integrity during the reassignment.

Before you start: Moving LUNs to another storage subsystem might need an additional entry in the multipath.conf file. Check with the storage subsystem vendor to see which content you must add to the file. You might be able to install and modify the file ahead of time.

When you intend to move both LUNs at the same time, perform these required steps:
1. Confirm that your operating system is configured for the new storage.
2. Shut down the host.
   If you are only moving the LUNs that contain the application and data, you can follow this procedure instead:
   a. Stop the applications that are using the LUNs.
   b. Unmount those file systems with the umount MOUNT_POINT command.
   c. If the file systems are an LVM volume, deactivate that VG with the vgchange -a n VOLUMEGROUP_NAME command.
   d. If you can, unload your HBA driver using the rmmod DRIVER_MODULE command. This command removes the SCSI definitions from the kernel (we will reload this module and rediscover the disks later). It is possible to tell the Linux SCSI subsystem to rescan for new disks without unloading the HBA driver (see the sketch in 6.6.3).
3. Remove the volumes from the host by using the svctask rmvdiskhostmap command (Example 6-32). To double-check that you have removed the volumes, use the svcinfo lshostvdiskmap command, which shows that these disks are no longer mapped to the Linux server.
Example 6-32 Remove the volumes from the host

IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Palau palau_SANB
IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Palau palau_Data
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap Palau
IBM_2145:ITSO-CLS1:admin>

4. Remove the volumes from the SVC by using the svctask rmvdisk command. This step makes the MDisks unmanaged, as seen in Example 6-33.

Cached data: When you run the svctask rmvdisk command, the SVC first double-checks that there is no outstanding dirty cached data for the volume that is being removed. If uncommitted cached data remains, the command fails with the following error message:
CMMVC6212E The command failed because data in the cache has not been committed to disk
You must wait for this cached data to be committed to the underlying storage subsystem before you can remove the volume. The SVC automatically destages uncommitted cached data two minutes after the last write activity for the volume. How much data there is to destage, and how busy the I/O subsystem is, determine how long this command takes to complete.
You can check whether the volume has uncommitted data in the cache by using the svcinfo lsvdisk <VDISKNAME> command and checking the fast_write_state attribute, which has the following meanings:
- empty: No modified data exists in the cache.
- not_empty: Modified data might exist in the cache.
- corrupt: Modified data might have existed in the cache, but any data has been lost.
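For example, a quick check before removing a volume might look like the following sketch (the volume name is from this scenario, and the output is abbreviated):

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk palau_SANB
...
fast_write_state empty
...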

Example 6-33 Remove the volumes from the SVC

IBM_2145:ITSO-CLS1:admin>svctask rmvdisk palau_SANB
IBM_2145:ITSO-CLS1:admin>svctask rmvdisk palau_Data
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
31 mdpalau_ivd1 online unmanaged 6.0GB 0000000000000013 DS4500 600a0b8000174233000000bd4877890f00000000000000000000000000000000
32 mdpalau_ivd online unmanaged 12.5GB 0000000000000014 DS4500 600a0b80001744310000011048777bda00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>


5. Using Storage Manager (our storage subsystem management tool), unmap and unmask the disks from the SVC back to the Linux server.

Important: If one of the disks is used to boot your Linux server, you must make sure that it is presented back to the host as SCSI ID 0 so that the FC adapter BIOS finds that disk during its initialization.

6. Power on your host server and enter your FC HBA BIOS before booting the operating system. Make sure that you change the boot configuration so that it points directly to your storage subsystem. In our example, we performed the following steps on a QLogic HBA:
   a. Pressed Ctrl+Q to enter the HBA BIOS.
   b. Opened Configuration Settings.
   c. Opened Selectable Boot Settings.
   d. Changed the entry from the SVC to your storage subsystem LUN with SCSI ID 0.
   e. Exited the menu and saved the changes.

Important: This is the last step that you can perform and still safely back out of everything that you have done so far. Up to this point, you can reverse all of your actions to get the server back online without data loss:
- Remap and remask the LUNs back to the SVC.
- Run the svctask detectmdisk command to rediscover the MDisks.
- Recreate the volumes with the svctask mkvdisk command.
- Remap the volumes back to the server with the svctask mkvdiskhostmap command.
After you start the next step, you might not be able to turn back without the risk of data loss.

7. We now restart the Linux server. If all of the zoning and LUN masking and mapping were done correctly, the Linux server boots as though nothing has happened. However, if you only moved the application LUN and left your Linux server running, follow these steps to see the new volume:
   a. Load your HBA driver with the modprobe DRIVER_NAME command. If you did not (and cannot) unload your HBA driver, you can issue commands to the kernel to rescan the SCSI bus to see the new volumes (see the sketch in 6.6.3).
   b. Check your syslog and verify that the kernel found the new volumes. On Red Hat Enterprise Linux, the syslog is stored in the /var/log/messages file.
   c. If your application and data are on an LVM volume, run the vgscan command to rediscover the VG, and then run the vgchange -a y VOLUME_GROUP command to activate the VG.
8. Mount your file systems with the mount /MOUNT_POINT command (Example 6-34 on page 303). The df output shows that all of the disks are available again.


Example 6-34 File system after migration

[root@Palau ~]# mount /dev/dm-2 /data
[root@Palau ~]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      10093752   1938124   7634620  21% /
/dev/sda1               101086     12054     83813  13% /boot
tmpfs                  1033496         0   1033496   0% /dev/shm
/dev/dm-2              5160576    158160   4740272   4% /data
[root@Palau ~]#

9. You are ready to start your application.
10.Finally, to make sure that the MDisks are removed from the SVC, run the svctask detectmdisk command. The MDisks are first discovered as offline, and then they are automatically removed when the SVC determines that there are no volumes associated with them.
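A short verification sketch (the MDisk names are from this scenario; run it after you unmap the LUNs at the storage subsystem):

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk

If the cleanup succeeded, mdpalau_ivd and mdpalau_ivd1 no longer appear in the output.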

6.7 Migrating ESX SAN disks to SVC disks


In this section, we move the two LUNs from our VMware ESX server to the SVC. The ESX operating system is installed locally on the host, but the two SAN disks are connected, and the virtual machines are stored there. We then manage those LUNs with the SVC, move them between other managed disks, and finally move them back to image mode disks so that those LUNs can then be masked and mapped back to the VMware ESX server directly. This example can help you perform any one of the following activities in your environment:
- Move your ESX server's data LUNs (the VMware VMFS file systems where your virtual machines might be stored), which are directly accessed from a storage subsystem, to virtualized disks under the control of the SVC.
- Move LUNs between storage subsystems while your VMware virtual machines are still running. You can perform this activity to move the data onto LUNs that are more appropriate for the type of data that is stored on those LUNs, taking into account availability, performance, and redundancy. We describe this step in 6.7.4, "Migrating the image mode volumes" on page 312.
- Move your VMware ESX server's LUNs back to image mode volumes so that they can be remapped and remasked directly back to the server. This step starts in 6.7.5, "Preparing to migrate from the SVC" on page 315.

You can use these activities individually, or together, to migrate your VMware ESX server's LUNs from one storage subsystem to another storage subsystem, using the SVC as your migration tool. If you do not use all three activities, you can use them to introduce the SVC into your environment, or to move the data between your storage subsystems. The only downtime that is required for these activities is the time that it takes you to remask and remap the LUNs between the storage subsystems and your SVC. Figure 6-80 on page 304 shows our starting SAN environment.


Figure 6-80 ESX environment before migration

Figure 6-80 shows our ESX server connected to the SAN infrastructure. It has two LUNs that are masked directly to it from our storage subsystem. Our ESX server represents a typical SAN environment with a host directly using LUNs that were created on a SAN storage subsystem, as shown in Figure 6-80:
- The ESX server's HBA cards are zoned so that they are in the Green Zone with our storage subsystem.
- The two LUNs that have been defined on the storage subsystem and that use LUN masking are directly available to our ESX server.

6.7.1 Connecting the SVC to your SAN fabric


This section describes the steps needed to introduce the SVC into your SAN environment. Although we only summarize these activities here, you can introduce the SVC into your SAN environment without any downtime to any host or application that also uses your storage area network. If you have an SVC already connected, skip to the instructions that are given in 6.7.2, Preparing your SVC to virtualize disks on page 306.


Attention: Be extremely careful when connecting the SVC to your storage area network, because this task requires you to connect cables to your SAN switches and to alter your switch zone configuration. Performing these activities incorrectly can render your SAN inoperable, so make sure that you fully understand the effect of your actions.

You must perform these tasks to connect the SVC to your SAN fabric:
- Assemble your SVC components (nodes, uninterruptible power supply unit, and SSPC), cable the SVC correctly, power the SVC on, and verify that the SVC is visible on your storage area network.
- Create and configure your SVC cluster.
- Create these additional zones:
  - An SVC node zone (our Black Zone; see Example 6-57 on page 327)
  - A storage zone (our Red Zone)
  - A host zone (our Blue Zone)

For more detailed information about how to configure the zones in the correct way, see Chapter 3, "Planning and configuration" on page 67. Figure 6-81 shows the environment that we set up.

Figure 6-81 SAN environment with SVC attached


6.7.2 Preparing your SVC to virtualize disks


This section describes the preparatory tasks that we perform before taking our ESX server or virtual machines offline. These tasks are all nondisruptive activities, which do not affect your SAN fabric or your existing SVC configuration (if you already have a production SVC in place).

Creating a storage pool


When we move the two ESX LUNs to the SVC, they are first used in image mode; therefore, we need a storage pool to hold those disks. We create an empty storage pool for these disks by using the command shown in Example 6-35. Our MDG_Nile_VM storage pool holds both of our ESX data LUNs.
Example 6-35 Creating an empty storage pool

IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_Nile_VM -ext 512
MDisk Group, id [3], successfully created

Creating the host definition


If you prepared the zones correctly, the SVC can see the ESX server's HBA adapters on the fabric (our host only had one HBA). Because many hosts are connected to our SAN fabric and in the Blue Zone, we first get the WWNs for our ESX server's HBA to make sure that we have the correct WWNs and to reduce our ESX server's downtime. Log in to your VMware management console as root, navigate to Configuration, and select Storage Adapters. The storage adapters are shown on the right side of this window and display all of the necessary information. Figure 6-82 shows our WWNs, which are 210000E08B89B8C0 and 210000E08B892BCD.

Figure 6-82 Obtain your WWN using the VMware Management Console

Use the svcinfo lshbaportcandidate command on the SVC to list all of the WWNs that the SVC can see on the SAN fabric but that have not yet been allocated to a host. Example 6-36 on page 307 shows the output of the nodes that it found on our SAN fabric. (If the port does not show up, it indicates a zone configuration problem.)


Example 6-36 Add the host to the SVC

IBM_2145:ITSO-CLS1:admin>svcinfo lshbaportcandidate
id
210000E08B89B8C0
210000E08B892BCD
210000E08B0548BC
210000E08B0541BC
210000E08B89CCC2
IBM_2145:ITSO-CLS1:admin>

After verifying that the SVC can see our host, we create the host entry and assign the WWNs to this entry. Example 6-37 shows these commands.
Example 6-37 Create the host entry

IBM_2145:ITSO-CLS1:admin>svctask mkhost -name Nile -hbawwpn 210000E08B89B8C0:210000E08B892BCD
Host, id [1], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lshost Nile
id 1
name Nile
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B892BCD
node_logged_in_count 4
state active
WWPN 210000E08B89B8C0
node_logged_in_count 4
state active
IBM_2145:ITSO-CLS1:admin>

Verifying that you can see your storage subsystem


If our zoning has been performed correctly, the SVC can also see the storage subsystem with the svcinfo lscontroller command (Example 6-38).
Example 6-38 Available storage controllers

IBM_2145:ITSO-CLS1:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0  DS4500                   IBM       1742-900
1  DS4700                   IBM       1814           FAStT
IBM_2145:ITSO-CLS1:admin>

Getting your disk serial numbers


To help avoid the risk of creating the wrong volumes from all of the available unmanaged MDisks (in case the SVC sees many available unmanaged MDisks), we get the LUN serial numbers from our storage subsystem administration tool (Storage Manager). When we discover these MDisks, we confirm that we have the correct serial numbers before we create the image mode volumes.

If you also use a DS4000 family storage subsystem, Storage Manager provides the LUN serial numbers. Right-click your logical drive, and choose Properties. The following figures show our serial numbers. Figure 6-83 shows disk serial number VM_W2k3.

Figure 6-83 Obtaining the disk serial number - VM_W2k3

Figure 6-84 shows the disk serial number for VM_SLES.

Figure 6-84 Obtaining the disk serial number - VM_SLES


We are ready to move the ownership of the disks to the SVC, discover them as MDisks, and give them back to the host as volumes.

6.7.3 Moving the LUNs to the SVC


In this step, we move the LUNs that are assigned to the ESX server and reassign them to the SVC. Our ESX server has two LUNs, as shown in Figure 6-85.

Figure 6-85 VMWare LUNs

The virtual machines are located on these LUNs. Therefore, to move these LUNs under the control of the SVC, we do not need to reboot the entire ESX server, but we do have to stop or suspend all VMware guests that are using these LUNs.

Moving VMware guest LUNs


To move the VMware LUNs to the SVC, perform the following steps: 1. Using Storage Manager, we have identified the LUN number that has been presented to the ESX Server. Record which LUN had which LUN number; see Figure 6-86.

Figure 6-86 Identify LUN numbers in IBM DS4000 Storage Manager

2. Identify all of the VMware guests that are using this LUN, and shut them down. One way to identify the guests is to highlight the virtual machine and open the Summary tab. The datastore that is used is displayed under Datastore. Figure 6-87 on page 310 shows a Linux virtual machine using the datastore named SLES_Costa_Rica.


Figure 6-87 Identify the LUNs that are used by virtual machines

3. If you have several ESX hosts, also check the other ESX hosts to make sure that no guest operating system is running and using this datastore.
4. Repeat steps 1 to 3 for every datastore that you want to migrate.
5. After the guests are suspended, we use Storage Manager (our storage subsystem management tool) to unmap and unmask the disks from the ESX server and to remap and remask the disks to the SVC.
6. From the SVC, discover the new disks with the svctask detectmdisk command. The disks are discovered and named mdiskN, where N is the next available MDisk number (starting from 0). Example 6-39 shows the commands that we used to discover our MDisks and to verify that we have the correct MDisks.
Example 6-39 Discover the new MDisks

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
21 mdisk21 online unmanaged 60.0GB 0000000000000008 DS4700 600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 mdisk22 online unmanaged 70.0GB 0000000000000009 DS4700 600a0b80002904de0000447a486d14cd00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>


Important: Match your discovered MDisk serial numbers (the UID column in the svcinfo lsmdisk command output) with the serial numbers that you obtained earlier (in Figure 6-83 and Figure 6-84 on page 308).

7. After we have verified that we have the correct MDisks, we rename them to avoid confusion in the future when we perform other MDisk-related tasks; see Example 6-40.
Example 6-40 Rename the MDisks

IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name ESX_W2k3 mdisk22
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name ESX_SLES mdisk21
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
21 ESX_SLES online unmanaged 60.0GB 0000000000000008 DS4700 600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 ESX_W2k3 online unmanaged 70.0GB 0000000000000009 DS4700 600a0b80002904de0000447a486d14cd00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>

8. We create our image mode volumes with the svctask mkvdisk command; see Example 6-41. The -vtype image parameter ensures that the command creates image mode volumes, which means that the virtualized disks have the exact same layout as though they were not virtualized.
Example 6-41 Create the image mode volumes

IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MDG_Nile_VM -iogrp 0 -vtype image -mdisk ESX_W2k3 -name ESX_W2k3_IVD
Virtual Disk, id [29], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MDG_Nile_VM -iogrp 0 -vtype image -mdisk ESX_SLES -name ESX_SLES_IVD
Virtual Disk, id [30], successfully created
IBM_2145:ITSO-CLS1:admin>
Example 6-42 Map the volumes to the host

IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Nile -scsi 0 ESX_SLES_IVD
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Nile -scsi 1 ESX_W2k3_IVD
Virtual Disk to Host map, id [1], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap
id name SCSI_id vdisk_id vdisk_name   wwpn             vdisk_UID
1  Nile 0       30       ESX_SLES_IVD 210000E08B892BCD 60050768018301BF280000000000002A
1  Nile 1       29       ESX_W2k3_IVD 210000E08B892BCD 60050768018301BF2800000000000029

10.Using the VMware management console, rescan to discover the new volumes. Open the configuration tab, select Storage Adapters, and click Rescan. During the rescan, you might receive geometry errors when ESX discovers that the old disk has disappeared. Your volumes will appear with new vmhba devices.
11.We are now ready to restart the VMware guests. At this point, you have migrated the VMware LUNs successfully to the SVC.
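If you prefer the ESX service console to the graphical client, you can trigger the same rescan from the command line. This is a minimal sketch only, assuming service console access and that vmhba1 is the Fibre Channel adapter in your ESX host (the adapter name is an assumption; check it in the storage adapters view first):

esxcfg-rescan vmhba1     (rescan the named adapter for new LUNs; repeat for each FC adapter)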

6.7.4 Migrating the image mode volumes


While the VMware server and its virtual machines are still running, we migrate the image mode volumes onto striped volumes, with the extents being spread over three other MDisks.

Preparing MDisks for striped mode volumes


In this example, we migrate the image mode volumes to striped volumes and move the data to another storage subsystem in one step.

Adding a new storage subsystem to SVC


If you are moving the data to a new storage subsystem, it is assumed that this storage subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches. Your environment must look similar to our environment, which is shown in Figure 6-88.

Figure 6-88 ESX SVC SAN environment

Make fabric zone changes


The first step is to set up the SAN configuration so that all of the zones are created. Add the new storage subsystem to the Red Zone so that the SVC can talk to it directly.
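How you create the zone depends on your switch vendor. The following lines are a hedged sketch only, assuming Brocade switches, an active zoning configuration named ITSO_cfg, and aliases SVC_Ports and DS4500_Ports that you have already defined (all of these names are assumptions for illustration):

zonecreate "Red_Zone_New", "SVC_Ports; DS4500_Ports"   (create the new SVC-to-storage zone)
cfgadd "ITSO_cfg", "Red_Zone_New"                      (add the zone to the active configuration)
cfgenable "ITSO_cfg"                                   (activate the updated configuration)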

We also need a Green Zone for our host to use when we are ready for it to directly access the disk, after it has been removed from the SVC. We assume that you have created the necessary zones. In our environment, we have performed these tasks:
- Created three LUNs on another storage subsystem and mapped them to the SVC
- Discovered them as MDisks
- Created a new storage pool
- Renamed these MDisks to more meaningful names
- Put all these MDisks into this storage pool
You can see the output of our commands in Example 6-43.

Example 6-43 Create a new storage pool

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
21 ESX_SLES online image 3 MDG_Nile_VM 60.0GB 0000000000000008 DS4700 600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 ESX_W2k3 online image 3 MDG_Nile_VM 70.0GB 0000000000000009 DS4700 600a0b80002904de0000447a486d14cd00000000000000000000000000000000
23 mdisk23 online unmanaged 55.0GB 000000000000000D DS4500 600a0b8000174233000000b4486d250300000000000000000000000000000000
24 mdisk24 online unmanaged 55.0GB 000000000000000E DS4500 600a0b800017443100000108486d182c00000000000000000000000000000000
25 mdisk25 online unmanaged 55.0GB 000000000000000F DS4500 600a0b8000174233000000b5486d255b00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_ESX_VD -ext 512
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name IBMESX-MD1 mdisk23
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name IBMESX-MD2 mdisk24
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name IBMESX-MD3 mdisk25
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk IBMESX-MD1 MDG_ESX_VD
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk IBMESX-MD2 MDG_ESX_VD
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk IBMESX-MD3 MDG_ESX_VD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
21 ESX_SLES online image 3 MDG_Nile_VM 60.0GB 0000000000000008 DS4700 600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 ESX_W2k3 online image 3 MDG_Nile_VM 70.0GB 0000000000000009 DS4700 600a0b80002904de0000447a486d14cd00000000000000000000000000000000
23 IBMESX-MD1 online managed 4 MDG_ESX_VD 55.0GB 000000000000000D DS4500 600a0b8000174233000000b4486d250300000000000000000000000000000000
24 IBMESX-MD2 online managed 4 MDG_ESX_VD 55.0GB 000000000000000E DS4500 600a0b800017443100000108486d182c00000000000000000000000000000000
25 IBMESX-MD3 online managed 4 MDG_ESX_VD 55.0GB 000000000000000F DS4500 600a0b8000174233000000b5486d255b00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>

Migrating the volumes


At this point, we are ready to migrate the image mode volumes onto striped volumes in the new storage pool (MDG_ESX_VD) with the svctask migratevdisk command (Example 6-44). While the migration is running, our VMware ESX server and our VMware guests remain running. To check the overall progress of the migration, we use the svcinfo lsmigrate command, as shown in Example 6-44. Listing the storage pool with the svcinfo lsmdiskgrp command shows that the free capacity of the old storage pool slowly increases as those extents are moved to the new storage pool.
Example 6-44 Migrating image mode volumes to striped volumes

IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -vdisk ESX_SLES_IVD -mdiskgrp MDG_ESX_VD
IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -vdisk ESX_W2k3_IVD -mdiskgrp MDG_ESX_VD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 30
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 29
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 1
migrate_source_vdisk_index 30
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 29
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning

3 MDG_Nile_VM online 2 2 130.0GB 512 1.0GB 130.00GB 130.00GB 130.00GB 100 0
4 MDG_ESX_VD online 3 0 165.0GB 512 35.0GB 0.00MB 0.00MB 0.00MB 0 0
IBM_2145:ITSO-CLS1:admin>

If you compare the svcinfo lsmdiskgrp output after the migration, as shown in Example 6-45, you can see that all of the virtual capacity has now been moved from the old storage pool (MDG_Nile_VM) to the new storage pool (MDG_ESX_VD). The mdisk_count column shows that the capacity is now spread over three MDisks.
Example 6-45 List MDisk group

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning
3 MDG_Nile_VM online 2 0 130.0GB 512 130.0GB 0.00MB 0.00MB 0.00MB 0 0
4 MDG_ESX_VD online 3 2 165.0GB 512 35.0GB 130.00GB 130.00GB 130.00GB 78 0
IBM_2145:ITSO-CLS1:admin>

The migration to the SVC is complete. You can now remove the original MDisks from the SVC and remove the corresponding LUNs from the storage subsystem. If these LUNs were the last LUNs in use on the old storage subsystem, we can remove that subsystem from our SAN fabric.
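A minimal sketch of that cleanup follows; it assumes that the source MDisks ESX_SLES and ESX_W2k3 no longer contain any volume extents, and you should verify the colon-separated list form and the filter syntax against your code level:

svctask rmmdisk -mdisk ESX_SLES:ESX_W2k3 MDG_Nile_VM   (remove both source MDisks from the old pool)
svcinfo lsmdisk -filtervalue mode=unmanaged            (confirm that they show as unmanaged again)

After you unmap the LUNs on the storage subsystem, run svctask detectmdisk so that the SVC notices that the LUNs are gone.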

6.7.5 Preparing to migrate from the SVC


Before we move the ESX server's LUNs from being accessed by the SVC as volumes to being accessed directly from the storage subsystem, we need to convert the volumes into image mode volumes. You might want to perform this activity for any one of these reasons:
- You purchased a new storage subsystem, and you were using the SVC as a tool to migrate from your old storage subsystem to this new storage subsystem.
- You used the SVC to FlashCopy or Metro Mirror a volume onto another volume, and you no longer need that host connected to the SVC.
- You want to move a host and its data, which are currently connected to the SVC, to a site where there is no SVC.
- Changes to your environment no longer require this host to use the SVC.

There are also other preparatory activities that we can perform before we shut down the host and reconfigure the LUN masking and mapping. This section describes those activities. In our example, we move volumes that are located on a DS4500 to image mode volumes that are located on a DS4700. If you are moving the data to a new storage subsystem, it is assumed that this storage subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches. Your environment must look similar to our environment, as described in "Adding a new storage subsystem to SVC" on page 312 and "Make fabric zone changes" on page 312.

Creating new LUNs


On our storage subsystem, we create two LUNs and mask the LUNs so that the SVC can see them. These two LUNs will eventually be given directly to the host, removing the volumes that it currently has. To check that the SVC can use them, issue the svctask detectmdisk command as shown in Example 6-46.
Example 6-46 Discover the new MDisks

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
23 IBMESX-MD1 online managed 4 MDG_ESX_VD 55.0GB 000000000000000D DS4500 600a0b8000174233000000b4486d250300000000000000000000000000000000
24 IBMESX-MD2 online managed 4 MDG_ESX_VD 55.0GB 000000000000000E DS4500 600a0b800017443100000108486d182c00000000000000000000000000000000
25 IBMESX-MD3 online managed 4 MDG_ESX_VD 55.0GB 000000000000000F DS4500 600a0b8000174233000000b5486d255b00000000000000000000000000000000
26 mdisk26 online unmanaged 120.0GB 000000000000000A DS4700 600a0b800026b282000041f0486e210100000000000000000000000000000000
27 mdisk27 online unmanaged 100.0GB 000000000000000B DS4700 600a0b800026b282000041e3486e20cf00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>

Even though the MDisks will not stay in the SVC for long, we suggest that you rename them to more meaningful names so that they do not get confused with other MDisks being used by other activities. We also create a storage pool to hold our new MDisks. Example 6-47 shows these tasks.
Example 6-47 Rename the MDisks

IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name ESX_IVD_SLES mdisk26
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name ESX_IVD_W2K3 mdisk27
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_IVD_ESX -ext 512
MDisk Group, id [5], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning

4 MDG_ESX_VD online 3 2 165.0GB 512 35.0GB 130.00GB 130.00GB 130.00GB 78 0
5 MDG_IVD_ESX online 0 0 0 512 0 0.00MB 0.00MB 0.00MB 0 0
IBM_2145:ITSO-CLS1:admin>

Our SVC environment is ready for the volume migration to image mode volumes.

6.7.6 Migrating the managed volumes to image mode volumes


While our ESX server is still running, we migrate the managed volumes onto the new MDisks using image mode volumes. The command to perform this action is the svctask migratetoimage command, which is shown in Example 6-48.
Example 6-48 Migrate the volumes to image mode volumes

IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk ESX_SLES_IVD -mdisk ESX_IVD_SLES -mdiskgrp MDG_IVD_ESX
IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk ESX_W2k3_IVD -mdisk ESX_IVD_W2K3 -mdiskgrp MDG_IVD_ESX
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
23 IBMESX-MD1 online managed 4 MDG_ESX_VD 55.0GB 000000000000000D DS4500 600a0b8000174233000000b4486d250300000000000000000000000000000000
24 IBMESX-MD2 online managed 4 MDG_ESX_VD 55.0GB 000000000000000E DS4500 600a0b800017443100000108486d182c00000000000000000000000000000000
25 IBMESX-MD3 online managed 4 MDG_ESX_VD 55.0GB 000000000000000F DS4500 600a0b8000174233000000b5486d255b00000000000000000000000000000000
26 ESX_IVD_SLES online image 5 MDG_IVD_ESX 120.0GB 000000000000000A DS4700 600a0b800026b282000041f0486e210100000000000000000000000000000000
27 ESX_IVD_W2K3 online image 5 MDG_IVD_ESX 100.0GB 000000000000000B DS4700 600a0b800026b282000041e3486e20cf00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>

During the migration, our ESX server is unaware that its data is being physically moved between storage subsystems. The virtual machines that are running on the server continue to run and can be used as usual. You can check the migration status with the svcinfo lsmigrate command, as shown in Example 6-49 on page 318.

Example 6-49 The svcinfo lsmigrate command and output

IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type Migrate_to_Image
progress 2
migrate_source_vdisk_index 29
migrate_target_mdisk_index 27
migrate_target_mdisk_grp 5
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type Migrate_to_Image
progress 12
migrate_source_vdisk_index 30
migrate_target_mdisk_index 26
migrate_target_mdisk_grp 5
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>

After the migration has completed, the image mode volumes are ready to be removed from the ESX server, and the real LUNs can be mapped and masked directly to the host by using the storage subsystem's tool.

6.7.7 Removing the LUNs from the SVC


Your ESX server's configuration determines the order in which your LUNs are removed from the control of the SVC, and whether you need to reboot the ESX server and suspend the VMware guests. In our example, we have moved the virtual machine disks. Therefore, to remove these LUNs from the control of the SVC, we must stop and suspend all of the VMware guests that are using them. The following steps must be performed:
1. Check which SCSI LUN IDs are assigned to the migrated disks by using the svcinfo lshostvdiskmap command, as shown in Example 6-50. Compare the volume UIDs to match each volume with its SCSI LUN ID.
Example 6-50 Note the SCSI LUN IDs

IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
1 Nile 0 30 ESX_SLES_IVD 210000E08B892BCD 60050768018301BF280000000000002A
1 Nile 1 29 ESX_W2k3_IVD 210000E08B892BCD 60050768018301BF2800000000000029
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count
0 vdisk_A 0 io_grp0 online 2 MDG_Image 36.0GB image 0 1
29 ESX_W2k3_IVD 0 io_grp0 online 4 MDG_ESX_VD 70.0GB striped 60050768018301BF2800000000000029 0 1
30 ESX_SLES_IVD 0 io_grp0 online 4 MDG_ESX_VD 60.0GB striped 60050768018301BF280000000000002A 0 1
IBM_2145:ITSO-CLS1:admin>

2. Shut down and suspend all guests that are using the LUNs. You can use the same method that is used in "Moving VMware guest LUNs" on page 309 to identify the guests that are using each LUN.
3. Remove the volumes from the host by using the svctask rmvdiskhostmap command (Example 6-51). To double-check that the volumes have been removed, use the svcinfo lshostvdiskmap command, which shows that these volumes are no longer mapped to the ESX server.
Example 6-51 Remove the volumes from the host

IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Nile ESX_W2k3_IVD
IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Nile ESX_SLES_IVD

4. Remove the volumes from the SVC by using the svctask rmvdisk command, which makes the MDisks unmanaged, as shown in Example 6-52.

Cached data: When you run the svctask rmvdisk command, the SVC first double-checks that there is no outstanding dirty cached data for the volume that is being removed. If there is still uncommitted cached data, the command fails with this error message:
CMMVC6212E The command failed because data in the cache has not been committed to disk
You have to wait for this cached data to be committed to the underlying storage subsystem before you can remove the volume. The SVC automatically destages uncommitted cached data two minutes after the last write activity for the volume. How much data there is to destage, and how busy the I/O subsystem is, determine how long this command takes to complete.
You can check whether the volume has uncommitted data in the cache by using the svcinfo lsvdisk <VDISKNAME> command and checking the fast_write_state attribute. This attribute has the following meanings:
empty: No modified data exists in the cache.
not_empty: Modified data might exist in the cache.
corrupt: Modified data might have existed in the cache, but the data has been lost.
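As a quick way to check the attribute before you remove each volume, you can filter the detailed view from an administration workstation. This is a minimal sketch, assuming that you run the CLI over SSH from a management host and that the cluster alias svccluster resolves to your SVC (the alias is an assumption):

ssh admin@svccluster svcinfo lsvdisk ESX_W2k3_IVD | grep fast_write_state

Proceed with svctask rmvdisk only when the attribute shows empty.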

Example 6-52 Remove the volumes from the SVC

IBM_2145:ITSO-CLS1:admin>svctask rmvdisk ESX_W2k3_IVD
IBM_2145:ITSO-CLS1:admin>svctask rmvdisk ESX_SLES_IVD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
26 ESX_IVD_SLES online unmanaged 120.0GB 000000000000000A DS4700 600a0b800026b282000041f0486e210100000000000000000000000000000000
27 ESX_IVD_W2K3 online unmanaged 100.0GB 000000000000000B DS4700 600a0b800026b282000041e3486e20cf00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>

5. Using Storage Manager (our storage subsystem management tool), unmap and unmask the disks from the SVC back to the ESX server. Remember that in Example 6-50 on page 318, we recorded the SCSI LUN IDs. When you map the LUNs on the storage subsystem, use the same SCSI LUN IDs that were used in the SVC.

Important: This is the last step that you can perform and still safely back out of everything you have done so far. Up to this point, you can reverse all of the actions that you have performed to get the server back online without data loss:
- Remap and remask the LUNs back to the SVC.
- Run the svctask detectmdisk command to rediscover the MDisks.
- Recreate the volumes with the svctask mkvdisk command.
- Remap the volumes back to the server with the svctask mkvdiskhostmap command.
After you start the next step, you might not be able to turn back without the risk of data loss.

6. Using the VMware management console, rescan to discover the new volume. Figure 6-89 shows the view before the rescan, and Figure 6-90 on page 321 shows the view after the rescan. Note that the size of the LUN has changed, because we have moved to another LUN on another storage subsystem.

Figure 6-89 Before adapter rescan

Figure 6-90 After adapter rescan

During the rescan, you might receive geometry errors when ESX discovers that the old disk has disappeared. Your volume appears with a new vmhba address, and VMware recognizes it as our VMWARE-GUESTS disk.
7. We are now ready to restart the VMware guests.
8. Finally, to make sure that the MDisks are removed from the SVC, run the svctask detectmdisk command. The MDisks are first discovered as offline and are then automatically removed when the SVC determines that there are no volumes associated with them.

6.8 Migrating AIX SAN disks to SVC volumes


In this section we describe how to move two LUNs from an AIX server, which are mapped directly from our DS4000 storage subsystem, over to the SVC. We manage those LUNs with the SVC, move them between other managed disks, and then finally move them back to image mode disks so that those LUNs can be masked and mapped back to the AIX server directly. Using this example can help you to perform any of the following activities in your environment:
- Move an AIX server's SAN LUNs from a storage subsystem and virtualize those same LUNs through the SVC, which is the first activity that you perform when introducing the SVC into your environment. This section shows that your host downtime is only a few minutes while you remap and remask disks using your storage subsystem LUN management tool. This step starts in 6.8.2, "Preparing your SVC to virtualize disks" on page 324.
- Move data between storage subsystems while your AIX server is still running and servicing your business application. You can perform this activity if you are removing a storage subsystem from your SAN environment and you want to move the data onto LUNs that are more appropriate for the

type of data that is stored on those LUNs, taking into account availability, performance, and redundancy. We describe this step in 6.8.4, "Migrating image mode volumes to volumes" on page 331.
- Move your AIX server's LUNs back to image mode volumes so that they can be remapped and remasked directly back to the AIX server. This step starts in 6.8.5, "Preparing to migrate from the SVC" on page 333.
Use these activities individually or together to migrate your AIX server's LUNs from one storage subsystem to another storage subsystem by using the SVC as your migration tool. If you do not need all three activities, you can use a subset of them to introduce the SVC into, or remove it from, your environment. The only downtime that is required for these activities is the time that it takes you to remask and remap the LUNs between the storage subsystems and your SVC. We show our AIX environment in Figure 6-91.

Figure 6-91 AIX SAN environment

Figure 6-91 shows our AIX server connected to our SAN infrastructure. It has two LUNs (hdisk3 and hdisk4) that are masked directly to it from our storage subsystem. The hdisk3 disk makes up the itsoaixvg LVM volume group, and the hdisk4 disk makes up the itsoaixvg1 LVM volume group, as shown in Example 6-53 on page 323.

Example 6-53 AIX SAN configuration

#lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Available 1D-08-02 1814 DS4700 Disk Array Device
hdisk4 Available 1D-08-02 1814 DS4700 Disk Array Device
#lspv
hdisk0 0009cddaea97bf61 rootvg active
hdisk1 0009cdda43c9dfd5 rootvg active
hdisk2 0009cddabaef1d99 rootvg active
hdisk3 0009cdda0a4c0dd5 itsoaixvg active
hdisk4 0009cdda0a4d1a64 itsoaixvg1 active
#

Our AIX server represents a typical SAN environment with a host directly using LUNs that were created on a SAN storage subsystem, as shown in Figure 6-91 on page 322:
- The AIX server's HBA cards are zoned so that they are in the Green (dotted line) Zone with our storage subsystem.
- The two LUNs, hdisk3 and hdisk4, have been defined on the storage subsystem. Using LUN masking, they are directly available to our AIX server.

6.8.1 Connecting the SVC to your SAN fabric


This section describes the steps to take to introduce the SVC into your SAN environment. Although this section only summarizes these activities, you can accomplish this task without any downtime to any host or application that also uses your storage area network. If you already have an SVC connected, skip to 6.8.2, "Preparing your SVC to virtualize disks" on page 324.

Attention: Be extremely careful when connecting the SVC to your storage area network, because this task requires you to connect cables to your SAN switches and to alter your switch zone configuration. Performing these activities incorrectly can render your SAN inoperable, so make sure that you fully understand the effect of your actions.

Connecting the SVC to your SAN fabric requires you to perform these tasks:
- Assemble your SVC components (nodes, uninterruptible power supply unit, and Master Console), cable the SVC correctly, power the SVC on, and verify that the SVC is visible on your SAN.
- Create and configure your SVC cluster.
- Create these additional zones:
  - An SVC node zone (our Black Zone, shown in Figure 6-92 on page 324)
  - A storage zone (our Red Zone)
  - A host zone (our Blue Zone)
Figure 6-92 on page 324 shows our environment.

Figure 6-92 SAN environment with SVC attached

6.8.2 Preparing your SVC to virtualize disks


This section describes the preparatory tasks that we perform before taking our AIX server offline. These tasks are all nondisruptive activities and do not affect your SAN fabric or your existing SVC configuration (if you already have a production SVC in place).

Creating a storage pool


When we move the two AIX LUNs to the SVC, they are first used in image mode; therefore, we create an empty storage pool to hold those disks, using the commands in Example 6-54 on page 325. We name this storage pool aix_imgmdg.

Example 6-54 Create empty mdiskgroup

IBM_2145:ITSO-CLS2:admin>svctask mkmdiskgrp -name aix_imgmdg -ext 512
MDisk Group, id [7], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning
7 aix_imgmdg online 0 0 0 512 0 0.00MB 0.00MB 0.00MB 0 0
IBM_2145:ITSO-CLS2:admin>

Creating our host definition


If you have prepared the zones correctly, the SVC can see the AIX server's HBA adapters on the fabric (our host has two HBAs). First, we get the WWNs for our AIX server's HBAs, because many hosts are connected to our SAN fabric and are in the Blue Zone. We want to make sure that we have the correct WWNs to reduce our AIX server's downtime. Example 6-55 shows the commands to get the WWNs; our host's fcs0 adapter has a WWN of 10000000C932A7FB.
Example 6-55 Discover your WWN

#lsdev -Ccadapter|grep fcs
fcs0 Available 1Z-08 FC Adapter
fcs1 Available 1D-08 FC Adapter
#lscfg -vpl fcs0
fcs0 U0.1-P2-I4/Q1 FC Adapter

Part Number.................00P4494
EC Level....................A
Serial Number...............1E3120A68D
Manufacturer................001E
Device Specific.(CC)........2765
FRU Number.................. 00P4495
Network Address.............10000000C932A7FB
ROS Level and ID............02C03951
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FF401210
Device Specific.(Z5)........02C03951
Device Specific.(Z6)........06433951
Device Specific.(Z7)........07433951
Device Specific.(Z8)........20000000C932A7FB
Device Specific.(Z9)........CS3.91A1
Device Specific.(ZA)........C1D3.91A1
Device Specific.(ZB)........C2D3.91A1
Device Specific.(YL)........U0.1-P2-I4/Q1

PLATFORM SPECIFIC

Name: fibre-channel
Model: LP9002
Node: fibre-channel@1
Device Type: fcp
Physical Location: U0.1-P2-I4/Q1
#lscfg -vpl fcs1
fcs1 U0.1-P2-I5/Q1 FC Adapter

Part Number.................00P4494
EC Level....................A
Serial Number...............1E3120A67B
Manufacturer................001E
Device Specific.(CC)........2765
FRU Number.................. 00P4495
Network Address.............10000000C932A800
ROS Level and ID............02C03891
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........02000909
Device Specific.(Z4)........FF401050
Device Specific.(Z5)........02C03891
Device Specific.(Z6)........06433891
Device Specific.(Z7)........07433891
Device Specific.(Z8)........20000000C932A800
Device Specific.(Z9)........CS3.82A1
Device Specific.(ZA)........C1D3.82A1
Device Specific.(ZB)........C2D3.82A1
Device Specific.(YL)........U0.1-P2-I5/Q1

PLATFORM SPECIFIC

Name: fibre-channel
Model: LP9000
Node: fibre-channel@1
Device Type: fcp
Physical Location: U0.1-P2-I5/Q1
#

The svcinfo lshbaportcandidate command on the SVC lists all of the WWNs that the SVC can see on the SAN fabric and that have not yet been allocated to a host. Example 6-56 shows the output of the nodes that the command found in our SAN fabric. (If a port does not show up, it indicates a zone configuration problem.)
Example 6-56 Add the host to the SVC

IBM_2145:ITSO-CLS2:admin>svcinfo lshbaportcandidate
id
10000000C932A7FB
10000000C932A800
210000E08B89B8C0
IBM_2145:ITSO-CLS2:admin>

After verifying that the SVC can see our host (Kanaga), we create the host entry and assign the WWN to this entry, as shown with the commands in Example 6-57.
Example 6-57 Create the host entry

IBM_2145:ITSO-CLS2:admin>svctask mkhost -name Kanaga -hbawwpn 10000000C932A7FB:10000000C932A800
Host, id [5], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lshost Kanaga
id 5
name Kanaga
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 10000000C932A800
node_logged_in_count 2
state inactive
WWPN 10000000C932A7FB
node_logged_in_count 2
state inactive
IBM_2145:ITSO-CLS2:admin>

Verifying that we can see our storage subsystem


If we performed the zoning correctly, the SVC can see the storage subsystem with the svcinfo lscontroller command (Example 6-58).
Example 6-58 Discover the storage controller

IBM_2145:ITSO-CLS2:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0 DS4500 IBM 1742-900
1 DS4700 IBM 1814
IBM_2145:ITSO-CLS2:admin>

Names: The svctask chcontroller command enables you to change the discovered storage subsystem name in the SVC. In complex SANs, we suggest that you rename your storage subsystems to more meaningful names.

Getting the disk serial numbers


To avoid creating the wrong image mode volumes from the available unmanaged MDisks (the SVC might see many of them), we obtain the LUN serial numbers from our storage subsystem administration tool (Storage Manager). When we discover these MDisks, we confirm that we have the correct serial numbers before we create the image mode volumes. If you also use a DS4000 family storage subsystem, Storage Manager provides the LUN serial numbers. Right-click your logical drive and choose Properties. The following figures show our serial numbers; Figure 6-93 on page 328 shows disk serial number kanage_lun0.
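If you prefer a command line over the Storage Manager GUI, the DS4000 script interface can show the same information. This is a hedged sketch only; it assumes that the SMcli client is installed, that the subsystem is registered under the name ITSO_DS4700 (a hypothetical name), and that the exact script syntax matches your Storage Manager version:

SMcli -n ITSO_DS4700 -c "show logicalDrive [\"kanage_lun0\"];"   (shows the logical drive properties, including the LUN identifier)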

Figure 6-93 Obtaining disk serial number - kanage_lun0

Figure 6-94 shows disk serial number kanage_Lun1.

Figure 6-94 Obtaining disk serial number - kanga_Lun1

We are ready to move the ownership of the disks to the SVC, discover them as MDisks, and give them back to the host as volumes.

6.8.3 Moving the LUNs to the SVC


In this step, we move the LUNs that are assigned to the AIX server and reassign them to the SVC. Because we only want to move the LUNs that hold our application and data files, we can move them without rebooting the host. The only requirement is that we unmount the file systems and vary off the VGs to ensure data integrity after the reassignment.

Before you start: Moving LUNs to the SVC requires that the Subsystem Device Driver (SDD) device driver is installed on the AIX server. You can install the SDD ahead of time; however, it might require an outage of your host to do so.

The following steps are required because we intend to move both LUNs at the same time:
1. Confirm that the SDD is installed.
2. Unmount and vary off the VGs:
   a. Stop the applications that are using the LUNs.
   b. Unmount those file systems with the umount MOUNT_POINT command.
   c. If the file systems are an LVM volume, deactivate that VG with the varyoffvg VOLUMEGROUP_NAME command.
Example 6-59 shows the commands that we ran on Kanaga.
Example 6-59 AIX command sequence

#varyoffvg itsoaixvg
#varyoffvg itsoaixvg1
#lsvg
rootvg
itsoaixvg
itsoaixvg1
#lsvg -o
rootvg

3. Using Storage Manager (our storage subsystem management tool), we unmap and unmask the disks from the AIX server and remap and remask the disks to the SVC.
4. From the SVC, discover the new disks with the svctask detectmdisk command. The disks are discovered and named mdiskN, where N is the next available MDisk number (starting from 0). Example 6-60 shows the commands that we used to discover our MDisks and to verify that we have the correct MDisks.
Example 6-60 Discover the new MDisks

IBM_2145:ITSO-CLS2:admin>svctask detectmdisk
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
24 mdisk24 online unmanaged 5.0GB 0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000

25 mdisk25 online unmanaged 8.0GB 0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>

Important: Match your discovered MDisk serial numbers (UID on the svcinfo lsmdisk command task display) with the serial numbers that you discovered earlier (in Figure 6-93 and Figure 6-94 on page 328).

5. After we have verified that we have the correct MDisks, we rename them to avoid confusion in the future when we perform other MDisk-related tasks (Example 6-61).
Example 6-61 Rename the MDisks

IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name Kanaga_AIX mdisk24
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name Kanaga_AIX1 mdisk25
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
24 Kanaga_AIX online unmanaged 5.0GB 0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 online unmanaged 8.0GB 0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>

6. We create our image mode volumes with the svctask mkvdisk command and the option -vtype image (Example 6-62). This command virtualizes the disks in the exact same layout as though they were not virtualized.
Example 6-62 Create the image mode volumes

IBM_2145:ITSO-CLS2:admin>svctask mkvdisk -mdiskgrp aix_imgmdg -iogrp 0 -vtype image -mdisk Kanaga_AIX -name IVD_Kanaga
Virtual Disk, id [8], successfully created
IBM_2145:ITSO-CLS2:admin>svctask mkvdisk -mdiskgrp aix_imgmdg -iogrp 0 -vtype image -mdisk Kanaga_AIX1 -name IVD_Kanaga1
Virtual Disk, id [9], successfully created
IBM_2145:ITSO-CLS2:admin>

7. Finally, we can map the new image mode volumes to the host (Example 6-63).
Example 6-63 Map the volumes to the host

IBM_2145:ITSO-CLS2:admin>svctask mkvdiskhostmap -host Kanaga IVD_Kanaga
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO-CLS2:admin>svctask mkvdiskhostmap -host Kanaga IVD_Kanaga1
Virtual Disk to Host map, id [1], successfully created
IBM_2145:ITSO-CLS2:admin>

FlashCopy: While the application is in a quiescent state, you can choose to use FlashCopy to copy the new image volumes onto other volumes. You do not need to wait until the FlashCopy process has completed before starting your application.

Now, we are ready to perform the following steps to put the image mode volumes online:
1. Remove the old disk definitions, if you have not done so already.
2. Run the cfgmgr -vs command to rediscover the available LUNs.
3. If your application and data are on an LVM volume, rediscover the VG, and then run the varyonvg VOLUME_GROUP command to activate the VG.
4. Mount your file systems with the mount /MOUNT_POINT command.
5. You are ready to start your application.
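For our Kanaga host, the sequence looks similar to the following minimal sketch; it assumes that the SDD is already installed and that /itsoaixfs and /itsoaixfs1 are the mount points (the mount point names are assumptions for illustration):

# cfgmgr -vs                 (rediscover the LUNs, now presented by the SVC)
# lspv                       (verify that the disk and vpath devices are back)
# varyonvg itsoaixvg         (activate the first volume group)
# varyonvg itsoaixvg1        (activate the second volume group)
# mount /itsoaixfs           (mount point name is an assumption)
# mount /itsoaixfs1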

6.8.4 Migrating image mode volumes to volumes


While the AIX server is still running and our file systems are in use, we migrate the image mode volumes onto striped volumes, with the extents being spread over three other MDisks.

Preparing MDisks for striped mode volumes


From our storage subsystem, we performed these tasks:
- Created and allocated three LUNs to the SVC
- Discovered them as MDisks
- Renamed these MDisks to more meaningful names
- Created a new storage pool
- Put all these MDisks into this storage pool
You can see the output of our commands in Example 6-64.
Example 6-64 Create a new storage pool

IBM_2145:ITSO-CLS2:admin>svctask mkmdiskgrp -name aix_vd -ext 512
IBM_2145:ITSO-CLS2:admin>svctask detectmdisk
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
24 Kanaga_AIX online image 7 aix_imgmdg 5.0GB 0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 online image 7 aix_imgmdg 8.0GB 0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 mdisk26 online unmanaged 6.0GB 000000000000000A DS4700 600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 mdisk27 online unmanaged 6.0GB 000000000000000B DS4700 600a0b800026b2820000438448751da900000000000000000000000000000000
28 mdisk28 online unmanaged 6.0GB 000000000000000C DS4700 600a0b800026b2820000439048751dc200000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name aix_vd0 mdisk26
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name aix_vd1 mdisk27
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name aix_vd2 mdisk28
IBM_2145:ITSO-CLS2:admin>svctask addmdisk -mdisk aix_vd0 aix_vd
IBM_2145:ITSO-CLS2:admin>svctask addmdisk -mdisk aix_vd1 aix_vd
IBM_2145:ITSO-CLS2:admin>svctask addmdisk -mdisk aix_vd2 aix_vd
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
24 Kanaga_AIX online image 7 aix_imgmdg 5.0GB 0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 online image 7 aix_imgmdg 8.0GB 0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 aix_vd0 online managed 6 aix_vd 6.0GB 000000000000000A DS4700 600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 aix_vd1 online managed 6 aix_vd 6.0GB 000000000000000B DS4700 600a0b800026b2820000438448751da900000000000000000000000000000000
28 aix_vd2 online managed 6 aix_vd 6.0GB 000000000000000C DS4700 600a0b800026b2820000439048751dc200000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>

Migrating the volumes


We are ready to migrate the image mode volumes onto striped volumes with the svctask migratevdisk command (Example 6-24 on page 295). While the migration is running, our AIX server is still running, and we can continue accessing the files. To check the overall progress of the migration, we use the svcinfo lsmigrate command, as shown in Example 6-65. Listing the storage pool with the svcinfo lsmdiskgrp command shows that the free capacity of the old storage pool slowly increases while those extents are moved to the new storage pool.
Example 6-65 Migrating image mode volumes to striped volumes

IBM_2145:ITSO-CLS2:admin>svctask migratevdisk -vdisk IVD_Kanaga -mdiskgrp aix_vd
IBM_2145:ITSO-CLS2:admin>svctask migratevdisk -vdisk IVD_Kanaga1 -mdiskgrp aix_vd
IBM_2145:ITSO-CLS2:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 10
migrate_source_vdisk_index 8
migrate_target_mdisk_grp 6
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 9
migrate_target_mdisk_grp 6
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS2:admin>

After this task has completed, Example 6-66 on page 333 shows that the volumes are spread over three MDisks in the aix_vd storage pool. The old storage pool is empty.
Example 6-66 Migration complete

IBM_2145:ITSO-CLS2:admin>svcinfo lsmdiskgrp aix_vd
id 6
name aix_vd
status online
mdisk_count 3
vdisk_count 2
capacity 18.0GB
extent_size 512
free_capacity 5.0GB
virtual_capacity 13.00GB
used_capacity 13.00GB
real_capacity 13.00GB
overallocation 72
warning 0
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdiskgrp aix_imgmdg
id 7
name aix_imgmdg
status online
mdisk_count 2
vdisk_count 0
capacity 13.0GB
extent_size 512
free_capacity 13.0GB
virtual_capacity 0.00MB
used_capacity 0.00MB
real_capacity 0.00MB
overallocation 0
warning 0
IBM_2145:ITSO-CLS2:admin>

Our migration to the SVC is complete. You can remove the original MDisks from the SVC, and you can remove these LUNs from the storage subsystem. If these LUNs were the last LUNs in use on our storage subsystem, we can remove that subsystem from our SAN fabric.

6.8.5 Preparing to migrate from the SVC


Before we change the AIX server's LUNs from being accessed by the SVC as volumes to being directly accessed from the storage subsystem, we need to convert the volumes into image mode volumes. You can perform this activity for one of these reasons:
- You purchased a new storage subsystem, and you were using the SVC as a tool to migrate from your old storage subsystem to this new storage subsystem.
- You used the SVC to FlashCopy or Metro Mirror a volume onto another volume, and you no longer need that host connected to the SVC.
- You want to move a host and its data, which are currently connected to the SVC, to a site where there is no SVC.
- Changes to your environment no longer require this host to use the SVC.

There are other preparatory activities to be performed before we shut down the host and reconfigure the LUN masking and mapping. This section covers those activities. If you are moving the data to a new storage subsystem, it is assumed that this storage subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches. Your environment must look similar to our environment, as shown in Figure 6-95.

Figure 6-95 Environment with SVC

Making fabric zone changes


The first step is to set up the SAN configuration so that all of the zones are created. Add the new storage subsystem to the Red Zone so that the SVC can communicate with it directly. Create a Green Zone for our host to use when we are ready for it to directly access the disk, after it has been removed from the SVC. It is assumed that you have created the necessary zones. After your zone configuration is set up correctly, the SVC sees the new storage subsystem's controller by using the svcinfo lscontroller command, as shown in Example 6-67 on page 335. It is also useful to rename the controller to a more meaningful name, which you can do with the svctask chcontroller -name command.
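A one-line rename looks similar to the following sketch; the new name ITSO_DS4500 and the controller ID 0 are assumptions for illustration (check the ID with svcinfo lscontroller first):

svctask chcontroller -name ITSO_DS4500 0   (rename controller 0 to the assumed name)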

Example 6-67 Discovering the new storage subsystem

IBM_2145:ITSO-CLS2:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0 DS4500 IBM 1742-900
1 DS4700 IBM 1814 FAStT
IBM_2145:ITSO-CLS2:admin>

Creating new LUNs


On our storage subsystem, we created two LUNs and masked them so that the SVC can see them. We will eventually give these LUNs directly to the host, removing the volumes that it currently has. To check that the SVC can use the LUNs, issue the svctask detectmdisk command, as shown in Example 6-68. In our example, we use two 10 GB LUNs that are located on the DS4500 subsystem; therefore, we migrate back to image mode volumes and to another subsystem in one step. We have already deleted the old LUNs on the DS4700 storage subsystem, which is why they appear offline here.
Example 6-68 Discover the new MDisks

IBM_2145:ITSO-CLS2:admin>svctask detectmdisk
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
24 Kanaga_AIX offline managed 7 aix_imgmdg 5.0GB 0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 offline managed 7 aix_imgmdg 8.0GB 0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 aix_vd0 online managed 6 aix_vd 6.0GB 000000000000000A DS4700 600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 aix_vd1 online managed 6 aix_vd 6.0GB 000000000000000B DS4700 600a0b800026b2820000438448751da900000000000000000000000000000000
28 aix_vd2 online managed 6 aix_vd 6.0GB 000000000000000C DS4700 600a0b800026b2820000439048751dc200000000000000000000000000000000
29 mdisk29 online unmanaged 10.0GB 0000000000000010 DS4500 600a0b8000174233000000b84876512f00000000000000000000000000000000
30 mdisk30 online unmanaged 10.0GB 0000000000000011 DS4500 600a0b80001744310000010e4876444600000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>

Even though the MDisks will not stay in the SVC for long, we suggest that you rename them to more meaningful names so that they do not get confused with other MDisks that are used by other activities. Also, we create the storage pool to hold our new MDisks, as shown in Example 6-69 on page 336.

Example 6-69 Rename the MDisks

IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name AIX_MIG mdisk29
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name AIX_MIG1 mdisk30
IBM_2145:ITSO-CLS2:admin>svctask mkmdiskgrp -name KANAGA_AIXMIG -ext 512
MDisk Group, id [3], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning
3 KANAGA_AIXMIG online 0 0 0 512 0 0.00MB 0.00MB 0.00MB 0 0
6 aix_vd online 3 2 18.0GB 512 5.0GB 13.00GB 13.00GB 13.00GB 72 0
7 aix_imgmdg offline 2 0 13.0GB 512 13.0GB 0.00MB 0.00MB 0.00MB 0 0
IBM_2145:ITSO-CLS2:admin>

At this point, our SVC environment is ready for the volume migration to image mode volumes.

6.8.6 Migrating the managed volumes


While our AIX server is still running, we migrate the managed volumes onto the new MDisks using image mode volumes. The command to perform this action is the svctask migratetoimage command, which is shown in Example 6-70.
Example 6-70 Migrate the volumes to image mode volumes

IBM_2145:ITSO-CLS2:admin>svctask migratetoimage -vdisk IVD_Kanaga -mdisk AIX_MIG -mdiskgrp KANAGA_AIXMIG
IBM_2145:ITSO-CLS2:admin>svctask migratetoimage -vdisk IVD_Kanaga1 -mdisk AIX_MIG1 -mdiskgrp KANAGA_AIXMIG
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
24 Kanaga_AIX offline managed 7 aix_imgmdg 5.0GB 0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 offline managed 7 aix_imgmdg 8.0GB 0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 aix_vd0 online managed 6 aix_vd 6.0GB 000000000000000A DS4700 600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 aix_vd1 online managed 6 aix_vd 6.0GB 000000000000000B DS4700 600a0b800026b2820000438448751da900000000000000000000000000000000
28 aix_vd2 online managed 6 aix_vd 6.0GB 000000000000000C DS4700 600a0b800026b2820000439048751dc200000000000000000000000000000000

29 AIX_MIG online image 3 KANAGA_AIXMIG 10.0GB 0000000000000010 DS4500 600a0b8000174233000000b84876512f00000000000000000000000000000000
30 AIX_MIG1 online image 3 KANAGA_AIXMIG 10.0GB 0000000000000011 DS4500 600a0b80001744310000010e4876444600000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>svcinfo lsmigrate
migrate_type Migrate_to_Image
progress 50
migrate_source_vdisk_index 9
migrate_target_mdisk_index 30
migrate_target_mdisk_grp 3
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type Migrate_to_Image
progress 50
migrate_source_vdisk_index 8
migrate_target_mdisk_index 29
migrate_target_mdisk_grp 3
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS2:admin>

During the migration, our AIX server is unaware that its data is being physically moved between storage subsystems. After the migration is complete, the image mode volumes are ready to be removed from the AIX server, and the real LUNs can be mapped and masked directly to the host by using the storage subsystem's tool.

6.8.7 Removing the LUNs from the SVC


The next step requires downtime while we remap and remask the disks so that the host sees them directly through the Green Zone. Because our LUNs only hold data files, and because we use separate VGs for them, we can remap and remask the disks without rebooting the host. The only requirement is that we unmount the file systems and vary off the VGs to ensure data integrity after the reassignment.

Before you start: Moving LUNs to another storage subsystem might need a driver other than SDD. Check with the storage subsystem's vendor to see which driver you will need. You might be able to install this driver ahead of time.

Follow these required steps to remove the SVC:
1. Confirm that the correct device driver for the new storage subsystem is loaded. Because we are moving to a DS4500, we can continue to use the SDD.
2. Shut down any applications and unmount the file systems:
   a. Stop the applications that are using the LUNs.
   b. Unmount those file systems with the umount MOUNT_POINT command.
   c. If the file systems are an LVM volume, deactivate that VG with the varyoffvg VOLUMEGROUP_NAME command.
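On Kanaga, this quiesce sequence looks similar to the following minimal sketch; the mount point names /itsoaixfs and /itsoaixfs1 are assumptions for illustration:

# umount /itsoaixfs          (mount point name is an assumption)
# umount /itsoaixfs1
# varyoffvg itsoaixvg        (deactivate both volume groups)
# varyoffvg itsoaixvg1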

3. Remove the volumes from the host by using the svctask rmvdiskhostmap command (Example 6-71). To double-check that you have removed the volumes, use the svcinfo lshostvdiskmap command, which shows that these disks are no longer mapped to the AIX server.
Example 6-71 Remove the volumes from the host

IBM_2145:ITSO-CLS2:admin>svctask rmvdiskhostmap -host Kanaga IVD_Kanaga
IBM_2145:ITSO-CLS2:admin>svctask rmvdiskhostmap -host Kanaga IVD_Kanaga1
IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Kanaga
IBM_2145:ITSO-CLS2:admin>

4. Remove the volumes from the SVC by using the svctask rmvdisk command, which makes the MDisks unmanaged, as shown in Example 6-72.

Cached data: When you run the svctask rmvdisk command, the SVC first double-checks that there is no outstanding dirty cached data for the volume being removed. If uncommitted cached data still exists, the command fails with the following error message:
CMMVC6212E The command failed because data in the cache has not been committed to disk
You have to wait for this cached data to be committed to the underlying storage subsystem before you can remove the volume. The SVC automatically destages uncommitted cached data two minutes after the last write activity for the volume. How much data there is to destage, and how busy the I/O subsystem is, determine how long this command takes to complete.
You can check whether the volume has uncommitted data in the cache by using the svcinfo lsvdisk <VDISKNAME> command and checking the fast_write_state attribute. This attribute has the following meanings:
empty: No modified data exists in the cache.
not_empty: Modified data might exist in the cache.
corrupt: Modified data might have existed in the cache, but any modified data has been lost.

Example 6-72 Remove the volumes from the SVC

IBM_2145:ITSO-CLS2:admin>svctask rmvdisk IVD_Kanaga
IBM_2145:ITSO-CLS2:admin>svctask rmvdisk IVD_Kanaga1
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
29 AIX_MIG online unmanaged 10.0GB 0000000000000010 DS4500 600a0b8000174233000000b84876512f00000000000000000000000000000000
30 AIX_MIG1 online unmanaged 10.0GB 0000000000000011 DS4500 600a0b80001744310000010e4876444600000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>

5. Using Storage Manager (our storage subsystem management tool), unmap and unmask the disks from the SVC back to the AIX server.

Important: This step is the last step that you can perform and still safely back out of everything you have done so far. Up to this point, you can reverse all of the actions that you have performed to get the server back online without data loss:
- Remap and remask the LUNs back to the SVC.
- Run the svctask detectmdisk command to rediscover the MDisks.
- Recreate the volumes with the svctask mkvdisk command.
- Remap the volumes back to the server with the svctask mkvdiskhostmap command.
After you start the next step, you might not be able to turn back without the risk of data loss.

We are ready to access the LUNs from the AIX server. If all of the zoning, LUN masking, and mapping were done successfully, our AIX server boots as though nothing has happened:
1. Run the cfgmgr -S command to discover the storage subsystem.
2. Use the lsdev -Cc disk command to verify the discovery of the new disks.
3. Remove the references to all of the old disks. Example 6-73 shows the removal using SDD, and Example 6-74 on page 340 shows the removal using SDDPCM.
Example 6-73 Remove references to old paths using SDD

#lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Available 1Z-08-02 1742-900 (900) Disk Array Device
hdisk4 Available 1Z-08-02 1742-900 (900) Disk Array Device
hdisk5 Defined 1Z-08-02 SAN volume Controller Device
hdisk6 Defined 1Z-08-02 SAN volume Controller Device
hdisk7 Defined 1D-08-02 SAN volume Controller Device
hdisk8 Defined 1D-08-02 SAN volume Controller Device
hdisk10 Defined 1Z-08-02 SAN volume Controller Device
hdisk11 Defined 1Z-08-02 SAN volume Controller Device
hdisk12 Defined 1D-08-02 SAN volume Controller Device
hdisk13 Defined 1D-08-02 SAN volume Controller Device
vpath0 Defined Data Path Optimizer Pseudo Device Driver
vpath1 Defined Data Path Optimizer Pseudo Device Driver
vpath2 Defined Data Path Optimizer Pseudo Device Driver
# for i in 5 6 7 8 10 11 12 13; do rmdev -dl hdisk$i -R;done
hdisk5 deleted
hdisk6 deleted
hdisk7 deleted
hdisk8 deleted
hdisk10 deleted
hdisk11 deleted
hdisk12 deleted
hdisk13 deleted
#for i in 0 1 2; do rmdev -dl vpath$i -R;done
vpath0 deleted
vpath1 deleted
vpath2 deleted
#lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Available 1Z-08-02 1742-900 (900) Disk Array Device
hdisk4 Available 1Z-08-02 1742-900 (900) Disk Array Device
#

Example 6-74 Remove references to old paths using SDDPCM

# lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Defined 1D-08-02 MPIO FC 2145
hdisk4 Defined 1D-08-02 MPIO FC 2145
hdisk5 Available 1D-08-02 MPIO FC 2145
# for i in 3 4; do rmdev -dl hdisk$i -R;done
hdisk3 deleted
hdisk4 deleted
# lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk5 Available 1D-08-02 MPIO FC 2145

4. If your application and data are on an LVM volume, rediscover the volume group (VG), and then run the varyonvg VOLUME_GROUP command to activate the VG.
5. Mount your file systems with the mount /MOUNT_POINT command.
6. You are ready to start your application; see the sketch that follows.
Finally, to make sure that the MDisks are removed from the SVC, run the svctask detectmdisk command. The MDisks are first discovered as offline, and they are then automatically removed after the SVC determines that no volumes are associated with them.
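As a minimal sketch of these final steps (the volume group name itsoaixvg and the mount point /itso are hypothetical names used for illustration only):

# varyonvg itsoaixvg
# mount /itso

Then, on the SVC, confirm that the old MDisks disappear:

svctask detectmdisk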

6.9 Using SVC for storage migration


The primary use of the SVC is not as a storage migration tool. However, the advanced capabilities of the SVC enable us to use it as one; you can add the SVC temporarily to your SAN environment to copy data from one storage subsystem to another. The SVC enables you to copy image mode volumes directly from one subsystem to another subsystem while host I/O is running. The only downtime that is required is when the SVC is added to, and removed from, your SAN environment. To use the SVC for migration purposes only, perform the following steps:
1. Add the SVC to your SAN environment.
2. Prepare the SVC.

3. Depending on your operating system, unmount the selected LUNs or shut down the host.
4. Add the SVC between your storage and the host.
5. Mount the LUNs or start the host again.
6. Start the migration.
7. After the migration process is complete, unmount the selected LUNs or shut down the host.
8. Remove the SVC from your SAN.
9. Mount the LUNs, or start the host again.
10. The migration is complete.
As you can see, very little downtime is required. If you prepare everything correctly, you can reduce your downtime to a few minutes. The copy process is handled by the SVC, so host performance is not hindered while the migration progresses. To use the SVC for storage migrations, perform the steps that are described in the following sections (a condensed command sketch follows this list):
- 6.5.2, Adding the SVC between the host system and the LSI 3500 on page 241
- 6.5.6, Migrating the volume from image mode to image mode on page 268
- 6.5.7, Removing image mode data from the SVC on page 278
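As a condensed, hedged sketch of the central migration commands (the MDisk, pool, and volume names here are illustrative assumptions; the sections listed above show the full procedures), the old LUN is presented to the SVC as an image mode volume and then migrated, image mode to image mode, onto an MDisk from the new subsystem:

svctask mkvdisk -mdiskgrp OLD_POOL -iogrp 0 -vtype image -mdisk OLD_MDISK -name MIGRATION_VD
svctask migratetoimage -vdisk MIGRATION_VD -mdisk NEW_MDISK -mdiskgrp NEW_POOL

You can monitor progress with the svcinfo lsmigrate command until the migration completes.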

6.10 Using volume mirroring and thin-provisioned volumes together


In this section, we show that you can use the volume mirroring feature and thin-provisioned volumes together to move data from a fully allocated volume to a thin-provisioned volume.

6.10.1 Zero detect feature


The zero detect feature for thin-provisioned volumes enables clients to reclaim unused allocated disk space (zeros) when converting a fully allocated volume to a thin-provisioned volume using volume mirroring. To migrate from a fully allocated volume to a thin-provisioned volume, perform these steps (a condensed command sketch follows this list):
1. Add the target thin-provisioned copy.
2. Wait for synchronization to complete.
3. Remove the source fully allocated copy.
By using this feature, clients can easily free up managed disk space and make better use of their storage, without needing to purchase any additional function for the SVC. Volume mirroring and thin-provisioned volume functions are included in the base virtualization license. Clients with thin-provisioned storage on an existing storage system can migrate their data under SVC management using thin-provisioned volumes, without having to allocate additional storage space. Zero detect works only if the disk actually contains zeros; an uninitialized disk can contain anything, unless the disk has been formatted (for example, using the -fmtdisk flag on the mkvdisk command).
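A condensed command sketch of these three steps follows (the pool and volume names are illustrative assumptions; the full commands and output are shown in 6.10.2, Volume mirroring with thin-provisioned volumes):

svctask addvdiskcopy -mdiskgrp THIN_POOL -rsize 2% -autoexpand -grainsize 32 VD_FULL
svcinfo lsvdisksyncprogress VD_FULL
svctask rmvdiskcopy -copy 0 VD_FULL

Run lsvdisksyncprogress repeatedly, and remove the fully allocated copy only after the progress of the new copy reaches 100.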

Figure 6-96 shows the thin-provisioned volume zero detect concept.

Figure 6-96 The thin-provisioned volume zero detect feature

Figure 6-97 shows the thin-provisioned volume organization.

Figure 6-97 The thin-provisioned volume organization

As shown in Figure 6-97, a thin-provisioned volume has these components:
Used capacity: The portion of real capacity that is being used to store data. For non-thin-provisioned copies, this value is the same as the volume capacity. If the volume copy is thin-provisioned, the value increases from zero to the real capacity value as more of the volume is written to.
Real capacity: The space that is actually allocated in the storage pool. In a thin-provisioned volume, this value can differ from the total capacity.
Free capacity: The difference between the real capacity and the used capacity. The SVC continuously tries to keep this contingency capacity available; if the used capacity grows to consume the real capacity and the volume has been configured with the -autoexpand option, the SVC automatically expands the allocated space for the volume to restore the free capacity.
Grains: The smallest unit into which the allocated space can be divided.
Metadata: Space allocated within the real capacity that tracks the used capacity, real capacity, and free capacity.

6.10.2 Volume mirroring with thin-provisioned volumes


In this section, we show an example of using the volume mirroring feature with thin-provisioned volumes:
1. We create a fully allocated volume of 15 GB named VD_Full, as shown in Example 6-75.
Example 6-75 VD_Full creation example

IBM_2145:ITSO-CLS2:admin>svctask mkvdisk -mdiskgrp 0 -iogrp 0 -mdisk 0:1:2:3:4:5 -node 1 -vtype striped -size 15 -unit gb -fmtdisk -name VD_Full
Virtual Disk, id [2], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_Full
id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status offline
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
capacity 15.00GB
type striped
formatted yes
.
.
vdisk_UID 60050768018401BF280000000000000B
mdisk_grp_name MDG_DS47
used_capacity 15.00GB
real_capacity 15.00GB
free_capacity 0.00MB
overallocation 100

2. We then add a thin-provisioned volume copy with the volume mirroring option by using the addvdiskcopy command and the autoexpand parameter, as shown in Example 6-76 on page 344.

Example 6-76 addvdiskcopy command

IBM_2145:ITSO-CLS2:admin>svctask addvdiskcopy -mdiskgrp 1 -mdisk 6:7:8:9 -vtype striped -rsize 2% -autoexpand -grainsize 32 -unit gb VD_Full
VDisk [2] copy [1] successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_Full
id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id many
mdisk_grp_name many
capacity 15.00GB
type many
formatted yes
mdisk_id many
mdisk_name many
vdisk_UID 60050768018401BF280000000000000B
sync_rate 50
copy_count 2
copy_id 0
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
used_capacity 15.00GB
real_capacity 15.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
copy_id 1
status online
sync no
primary no
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32

As you can see in Example 6-76, VD_Full has a copy_id 1 where the used_capacity is 0.41 MB, which is equal to the metadata, because only zeros exist on the disk.

The real_capacity is 323.57 MB, which matches the -rsize 2% value that is specified in the addvdiskcopy command. The free_capacity is 323.17 MB, which is the real capacity minus the used capacity. When zeros are written to the disk, the thin-provisioned volume does not consume space. Example 6-77 shows that the thin-provisioned copy still consumes no space, even after the two copies are fully synchronized.
Example 6-77 Thin-provisioned volume display

IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisksyncprogress 2
vdisk_id vdisk_name copy_id progress estimated_completion_time
2        VD_Full    0       100
2        VD_Full    1       100
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_Full
id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id many
mdisk_grp_name many
capacity 15.00GB
type many
formatted yes
mdisk_id many
mdisk_name many
vdisk_UID 60050768018401BF280000000000000B
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 2
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 15.00GB
real_capacity 15.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
copy_id 1
status online

sync yes
primary no
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32

3. We can split the volume mirror, or remove one of the copies while keeping the thin-provisioned copy as our valid copy, by using the splitvdiskcopy command or the rmvdiskcopy command:
- If you need the copy as a thin-provisioned clone, we suggest that you use the splitvdiskcopy command, because it generates a new volume that you can map to any server that you want.
- If you are migrating from a fully allocated volume to a thin-provisioned volume without any effect on server operations, we suggest that you use the rmvdiskcopy command. In this case, the original volume name is kept, and the volume remains mapped to the same server.
Example 6-78 shows the splitvdiskcopy command.
Example 6-78 splitvdiskcopy command

IBM_2145:ITSO-CLS2:admin>svctask splitvdiskcopy -copy 1 -name VD_SEV VD_Full
Virtual Disk, id [7], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk -filtervalue name=VD*
id name    IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type    FC_id FC_name RC_id RC_name vdisk_UID                        fc_map_count copy_count fast_write_state
2  VD_Full 0           io_grp0       online 0            MDG_DS47       15.00GB  striped                             60050768018401BF280000000000000B 0            1          empty
7  VD_SEV  0           io_grp0       online 1            MDG_DS83       15.00GB  striped                             60050768018401BF280000000000000D 0            1          empty
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_SEV
id 7
name VD_SEV
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
capacity 15.00GB
type striped
formatted no

vdisk_UID 60050768018401BF280000000000000D
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32

Example 6-79 shows the rmvdiskcopy command.
Example 6-79 rmvdiskcopy command

IBM_2145:ITSO-CLS2:admin>svctask rmvdiskcopy -copy 0 VD_Full
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk -filtervalue name=VD*
id name    IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type    FC_id FC_name RC_id RC_name vdisk_UID                        fc_map_count copy_count fast_write_state
2  VD_Full 0           io_grp0       online 1            MDG_DS83       15.00GB  striped                             60050768018401BF280000000000000B 0            1          empty
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk 2
id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
capacity 15.00GB
type striped
formatted no
vdisk_UID 60050768018401BF280000000000000B
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 1
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32

Chapter 7. Easy Tier
In this chapter we describe the function provided by the Easy Tier disk performance optimization feature of the SAN Volume Controller. We also explain how to activate the Easy Tier process, both for evaluation purposes and for automatic extent migration.

7.1 Overview of Easy Tier


Determining the amount of I/O activity occurring on an SVC extent, and when to move that extent to an appropriate storage performance tier, is usually too complex a task to manage manually. Easy Tier is a no-charge performance optimization function that overcomes this issue by automatically migrating, or moving, extents belonging to a volume between MDisk storage tiers.

Easy Tier continually monitors the I/O activity and latency of the extents on the volumes in a storage pool. At least once every 24 hours, it evaluates the historical activity and creates a migration plan based on this history. It dynamically moves high activity extents to a higher disk tier within the storage pool (Easy Tier migrates other active extents as well, not just the hottest extents reported). It also moves extents whose activity has dropped off, or cooled, from the high-tier MDisks back to a lower-tier MDisk. Because this migration works at the extent level, it is often referred to as sub-LUN migration.

The Easy Tier function can be turned on or off at the storage pool level and at the volume level. To experience the potential benefits of using Easy Tier in your environment before actually installing expensive solid-state drives (SSDs), you can turn on the Easy Tier function for a single tier storage pool. Next, also turn on the Easy Tier function for the volumes within that pool. This starts monitoring activity on the volume extents in the pool. Even though Easy Tier extent migration is not possible within a single tier pool, the Easy Tier statistical measurement function is available.

Note: Image mode and sequential volumes are not candidates for Easy Tier automatic data placement. They can only be measured.

7.2 Easy Tier concepts


This section explains the concepts underpinning Easy Tier functionality.

7.2.1 SSD arrays and MDisks


SSD drives are treated no differently by the SVC than HDDs with respect to RAID arrays or MDisks. The individual SSDs in the storage managed by the SVC are combined into an array, usually in RAID 10 or RAID 5 format. It is unlikely that RAID 6 SSD arrays will be used, because of the double parity overhead, with two SSD logical drives used for parity only. A LUN is created on the array, which is then presented to the SVC as a normal managed disk (MDisk). As is the case for HDDs, the SSD RAID array format helps protect against individual SSD failures. Depending on your requirements, additional high availability protection, above the RAID level, can be achieved by using volume mirroring. In the example disk tier pool shown in Figure 7-2 on page 352, you can see the SSD MDisks presented from the SSD disk arrays.

7.2.2 Disk tiers


It is likely that the MDisks (LUNs) presented to the SVC cluster will have different performance attributes because of the type of disk or RAID array on which they reside. The MDisks can be on 15K RPM Fibre Channel or SAS disks, Nearline SAS or SATA disks, or even solid-state drives (SSDs). Therefore, a storage tier attribute is assigned to each MDisk; the default is generic_hdd. Starting with SVC 6.1, a new disk tier attribute, generic_ssd, is available for SSDs. Note that the SVC does not automatically detect SSD MDisks. All external MDisks are initially put into the generic_hdd tier by default, and the administrator must then manually change the tier of SSD MDisks to generic_ssd by using the CLI or GUI.
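As a one-line illustration (the MDisk name here is an assumption; the full procedure is shown in 7.6.4, Setting the disk tier on page 368), the tier is changed with the chmdisk command:

svctask chmdisk -tier generic_ssd ssd_mdisk0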

7.2.3 Single tier storage pools


Figure 7-1 shows a scenario in which a single storage pool will be populated with MDisks presented by an external storage controller. In this solution the striped or mirrored volume can be measured by Easy Tier, but no action to optimize the performance will occur.

Figure 7-1 Single tier storage pool with striped volume

MDisks that are used in a single tier storage pool should have the same hardware characteristics: for example, the same RAID type, RAID array size, disk type, disk revolutions per minute (RPM), and controller performance characteristics.

7.2.4 Multiple tier storage pools


A multiple tiered storage pool will have a mix of MDisks with more than one type of disk tier attribute, for example, a storage pool containing a mix of generic_hdd and generic_ssd MDisks.

Figure 7-2 shows a scenario in which a storage pool is populated with two different MDisk types: one belonging to an SSD array, and one belonging to an HDD array. Although this example shows RAID5 arrays, other RAID types can be used.

Figure 7-2 Multitier storage pool with striped volume

Adding SSD to the pool means additional space is also now available for new volumes, or volume expansion.

7.2.5 Easy Tier process


The Easy Tier function has four main processes: the I/O Monitor (IOM), the Data Placement Advisor (DPA), the Data Migration Planner (DMP), and the Data Migrator (DM), as shown in Figure 7-3. These processes make sure that the extent allocation in multitiered storage pools is optimized for the best performance, as monitored on your workload. Statistics about extent utilization are collected at 5-minute intervals. Every 24 hours, a heat map is created, which is used to generate a migration plan and a summary report. The migration plan contains information about which extents to promote to the upper tier or demote to the lower tier; the summary report is used by STAT (see 7.4.1, Measuring by using the Storage Advisor Tool on page 357). Easy Tier is based on an algorithm with a threshold that evaluates whether an extent is cold or hot. If the activity of an extent is below this threshold, the internal algorithm does not consider moving it to the SSD tier. It is therefore possible, although not likely, that the SSD tier will not be fully used, if the Easy Tier cost/benefit algorithm determines that further extents should not be promoted.

Figure 7-3 Easy tier process flow

The flow between these processes is as follows:
I/O Monitoring: This process operates continuously and monitors volumes for host I/O activity. It collects performance statistics for each extent and derives a rolling average for the I/O activity. Easy Tier makes allowances for large-block I/Os and therefore considers only I/Os of up to 64 KB as migration candidates. This is an efficient process and adds negligible processing overhead to the SVC nodes.
Data Placement Advisor: The Data Placement Advisor uses the workload statistics to make a cost/benefit decision as to which extents are candidates for migration to the higher performance (SSD) tier. This process also identifies extents that need to be migrated back to the lower (HDD) tier.
Data Migration Planner: Using the extents previously identified, the Data Migration Planner builds the extent migration plan for the storage pool.
Data Migrator: The Data Migrator schedules and performs the actual movement, or migration, of the volume's extents up to, or down from, the high disk tier. The extent migration rate is capped at a maximum of 15 MBps, which equates to around 2 TB per day migrated between disk tiers.
When relocating volume extents, Easy Tier performs these actions:
- It attempts to migrate the most active volume extents up to SSD first.
- To ensure that a free extent is available, a less frequently accessed extent might first need to be migrated back to HDD.
- A previous migration plan and any queued extents that are not yet relocated are abandoned.

7.2.6 Easy Tier operating modes


There are three main operating modes for Easy Tier: Off mode, Evaluation or measurement only mode, and Automatic Data Placement or extent migration mode.

Easy Tier - Off mode


With Easy Tier turned off, there are no statistics recorded and no extent migration.

Evaluation or measurement only mode


Easy Tier Evaluation or measurement only mode collects usage statistics for each extent in a single tier storage pool where the Easy Tier value is set to on for both the volume and the pool. This is typically done for a single tier pool containing only HDDs, so that the benefits of adding SSDs to the pool can be evaluated prior to any major hardware acquisition. A statistics summary file, named dpa_heat.nodeid.yymmdd.hhmmss.data, is created in the /dumps directory of the SVC nodes. This file can be offloaded from the SVC nodes with PSCP -load, or by using the GUI, as shown in 7.4.1, Measuring by using the Storage Advisor Tool on page 357. A web browser is used to view the report created by the tool.

Auto Data Placement or extent migration mode


In Auto Data Placement or extent migration operating mode, the storage pool parameter -easytier must be set to on or auto, and the volumes in the pool must have -easytier on. The storage pool must also contain MDisks with different disk tiers, making it a multitiered storage pool. Dynamic data movement is transparent to the host server and application users of the data, other than providing improved performance. Extents are automatically migrated according to the rules described in 7.3.2, Implementation rules on page 355. The statistics summary file is also created in this mode. This file can be offloaded for input to the advisor tool, which produces a report on the extents moved to SSD and a prediction of the performance improvement that could be gained if more SSD arrays were available.

7.2.7 Easy Tier activation


To activate Easy Tier, set the Easy Tier value on the pool and volumes as shown in Table 7-1. The defaults are set in favor of Easy Tier. For example, if you create a new storage pool, the -easytier value is auto. If you create a new volume, the value is on.

Table 7-1 Easy Tier states

Storage pool         Single tier or multitier   Volume copy         Volume copy
Easy Tier setting    storage pool               Easy Tier setting   Easy Tier status
Off                  Single                     Off                 Inactive (see note 2)
Off                  Single                     On                  Inactive (see note 2)
Off                  Multi                      Off                 Inactive (see note 2)
Off                  Multi                      On                  Inactive (see note 2)
Auto (see note 5)    Single                     Off                 Inactive (see note 2)
Auto (see note 5)    Single                     On                  Inactive (see note 2)
Auto (see note 5)    Multi                      Off                 Measured (see note 3)
Auto (see note 5)    Multi                      On                  Active (see note 1)
On                   Single                     Off                 Measured (see note 3)
On                   Single                     On                  Measured (see note 3)
On                   Multi                      Off                 Measured (see note 3)
On                   Multi                      On                  Active (see note 1)

Notes:

1. If the volume copy is in image or sequential mode, or is being migrated, the volume copy Easy Tier status will be measured instead of active.
2. When the volume copy status is inactive, no Easy Tier functions are enabled for that volume copy.
3. When the volume copy status is measured, the Easy Tier function collects usage statistics for the volume, but automatic data placement is not active.
4. When the volume copy status is active, the Easy Tier function operates in automatic data placement mode for that volume.
5. The default Easy Tier setting for a storage pool is auto, and the default Easy Tier setting for a volume copy is on. This means that Easy Tier functions are disabled for storage pools with a single tier, and that automatic data placement mode is enabled for all striped volume copies in a storage pool with two tiers.
Examples of the use of these parameters are shown in 7.6, Using Easy Tier with the SVC CLI on page 365 and 7.7, Using Easy Tier with the SVC GUI on page 369.

7.3 Easy Tier implementation considerations


In this section we describe considerations to keep in mind before implementing Easy Tier.

7.3.1 Prerequisites
No Easy Tier license is required for the SVC; it comes as a standard feature. For Easy Tier to migrate extents, you need disk storage available that has different tiers, for example, a mix of SSD and HDD.

7.3.2 Implementation rules


Keep the following implementation and operation rules in mind when you use the IBM System Storage Easy Tier function on the SAN Volume Controller:
- Easy Tier automatic data placement is not supported on image mode or sequential volumes. I/O monitoring for such volumes is supported, but you cannot migrate extents on these volumes unless you convert image or sequential volume copies to striped volumes.
- Easy Tier is available for thin-provisioned volumes, copy services, and volume mirroring. Automatic data placement and extent I/O activity monitoring are supported on each copy of a mirrored volume. Easy Tier works on each copy independently of the other copy and is transparent to FlashCopy and Remote Copy operations. For thin-provisioned volumes, only the real storage is subject to management.
Note: Volume mirroring can have different workload characteristics on each copy of the data, because reads are normally directed to the primary copy and writes occur to both copies. Thus, the number of extents that Easy Tier migrates to the SSD tier will probably differ for each copy.
- If possible, the SAN Volume Controller creates new volumes or volume expansions by using extents from MDisks in the HDD tier, but it uses extents from MDisks in the SSD tier if necessary.
When a volume is migrated out of a storage pool that is managed with Easy Tier, Easy Tier automatic data placement mode is no longer active on that volume. Automatic data placement is also turned off while a volume is being migrated, even if the migration is between pools that both have Easy Tier automatic data placement enabled. Automatic data placement for the volume is re-enabled when the migration is complete.

7.3.3 Limitations
Limitations exist when using IBM System Storage Easy Tier on the SAN Volume Controller:
- Removing an MDisk by using the -force parameter: When an MDisk is deleted from a storage pool with the -force parameter, extents in use are migrated to MDisks in the same tier as the MDisk that is being removed, if possible. If insufficient extents exist in that tier, extents from the other tier are used.
- Migrating extents: When Easy Tier automatic data placement is enabled for a volume, the svctask migrateexts command-line interface (CLI) command cannot be used on that volume.
- Migrating a volume to another storage pool: When the SAN Volume Controller migrates a volume to a new storage pool, Easy Tier automatic data placement between the two tiers is temporarily suspended. After the volume is migrated to its new storage pool, Easy Tier automatic data placement between the generic SSD tier and the generic HDD tier resumes for the moved volume, if appropriate. When the SAN Volume Controller migrates a volume from one storage pool to another, it attempts to migrate each extent to an extent in the new storage pool in the same tier as the original extent. In several cases, such as when a target tier is unavailable, the other tier is used; for example, the generic SSD tier might be unavailable in the new storage pool.
- Migrating a volume to image mode: Easy Tier automatic data placement does not support image mode. When a volume with Easy Tier automatic data placement mode active is migrated to image mode, Easy Tier automatic data placement mode is no longer active on that volume. Image mode and sequential volumes cannot be candidates for automatic data placement, but Easy Tier does support evaluation mode for image mode volumes.

Best practices
Always set the storage pool -easytier value to on, rather than to the default value auto. This makes it easier to turn on evaluation mode for existing single tier pools, and no further changes are needed when you move to multitier pools. See Easy Tier activation on page 354 for more information about the mix of pool and volume settings. Using Easy Tier can also make it more appropriate to use smaller storage pool extent sizes.

7.4 Measuring and activating Easy Tier


In the following sections we describe how to measure using Easy Tier and how to activate it.

7.4.1 Measuring by using the Storage Advisor Tool


The IBM Storage Advisor Tool (STAT) is a command-line tool that runs on Windows systems. It takes as input the dpa_heat files created on the SVC nodes and produces a set of Hypertext Markup Language (HTML) files containing activity reports. For more information, visit the following website:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S4000935
Contact your IBM Representative or IBM Business Partner for further details about the IBM Storage Advisor Tool.

Offloading statistics
To extract the summary performance data, use one of these methods:

Using the command-line interface (CLI)


Find the most recent dpa_heat.node_name.date.time.data file in the cluster by entering the following CLI command:
lsdumps node_id | node_name
where node_id | node_name is the node ID or name for which to list the available dpa_heat data files. Next, perform the normal PSCP -load download process:
pscp -unsafe -load saved_putty_configuration admin@cluster_ip_address:/dumps/dpa_heat.node_name.date.time.data your_local_directory
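As a brief illustration with hypothetical values (the node name, cluster IP address, saved PuTTY session name, and local directory below are assumptions for this sketch, not values from this environment):

lsdumps SVCNode_1
pscp -unsafe -load ITSO_SVC3 admin@10.18.228.81:/dumps/dpa_heat.SVCNode_1.111017.151832.data c:\temp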

Using the GUI


If you prefer using the GUI, navigate to the Settings → Support page and click Show full log listing. This lists all the log files available for download. Finally, right-click the /dumps/dpa_heat.*.data file and select Download, as shown in Figure 7-4.

Figure 7-4 dpa_heat file download

Running the tool


You can run the tool from a command line or terminal session by specifying up to two input dpa_heat file names and directory paths, for example:
C:\Program Files\IBM\STAT>STAT dpa_heat.nodenumber.yymmdd.hhmmss.data
A file called index.html is then created in the STAT base directory. When opened with your browser, it displays a summary page, as shown in Figure 7-5.

Figure 7-5 Example of STAT Summary

The distribution of hot data and cold data for each volume is shown in the volume heat distribution report. The report displays the portion of the capacity of each volume on SSD (red), and HDD (blue), as shown in Figure 7-6.

Figure 7-6 STAT Volume Heatmap Distribution sample

7.5 SSD implementation and configuration


In this section we describe SSD drive configuration using the GUI. When considering SSD implementation in your environment, remember that measuring your I/O activity is important in order to get an accurate picture of your environment. This leads to a better cost/benefit solution, because SSD implementations can be expensive. A rule of thumb to follow when considering the addition of SSD drives is that roughly 80% of I/O goes to 20% of the data; moving that 20% to SSDs is therefore likely to improve performance. Remember that internal SSDs can be configured at the following two RAID levels:
- RAID-1/10: In this configuration, one half of the mirror is placed in each node of the I/O group, providing redundancy in case of a node failure.
- RAID-0: In this configuration, all the drives are assigned to the same node. This configuration is intended to be used with volume mirroring, because no redundancy is provided in case of a node failure.
In our configuration we have four 136.2 GB SSDs on two different nodes configured in one I/O group. Navigate to Pools → Internal Storage; the solid-state drives are candidates and belong to the I/O group io_grp1, as shown in Figure 7-7.

Figure 7-7 Internal Storage

Click on Configure Storage and the Configure Internal Storage window will appear (Figure 7-8).

Figure 7-8 Configure Internal Storage window

This window shows the four available drives. The next step is to select the configuration preset from the three options available. In the following sections we show each of these configurations and the differences between them.

7.5.1 Mirrored configuration


This configuration creates one RAID-10 array with all the selected drives. As shown in Figure 7-9, we select Mirrored as the configuration preset, provisioning the four drives.

Figure 7-9 Mirrored preset

Next, the storage pool configuration has to be selected. For this example we choose to expand an existing storage pool, ssd_pool, as shown in Figure 7-10 on page 361.

Figure 7-10 Storage Pool selection

Finally, click Finish to apply the configuration. The resulting output is shown (Figure 7-11).

Figure 7-11 Configuration output

From the Pools → MDisks by Pools menu you can see the newly configured MDisk, mdisk12 in our example, added to the storage pool that we selected previously, ssd_pool. Right-clicking the MDisk shows the RAID level configured for the selected preset (Figure 7-12).

Figure 7-12 RAID level for the created MDisk

7.5.2 Easy Tier


The Easy Tier configuration preset creates RAID-1 arrays with the drives distributed across the I/O group nodes, thereby providing node redundancy. Figure 7-13 shows the selected preset.

Figure 7-13 Easy Tier preset

Next we again select to expand the existing ssd_pool Storage Pool. We will skip the Storage Pool selection window and jump straight to the resulting output shown in Figure 7-14.

Figure 7-14 Configuration output

Finally, from the Pools → MDisks by Pools menu, the two newly created MDisks (mdisk10 and mdisk13) are displayed and, as shown in Figure 7-15 on page 363, each one is configured as a RAID-1 array.

Figure 7-15 RAID level of the MDisks created

From the Member Drives tab, you can see that each MDisk is configured with drives that belong to different nodes in the same I/O group, as shown in Figure 7-16, providing node redundancy. Refer to Figure 7-7 on page 359 to see which node owns which drives.

Figure 7-16 Member drives for the selected mdisk

7.5.3 Striped
Finally, we show the Striped configuration preset (Figure 7-17 on page 364). This preset creates RAID-0 arrays with drives from the same node, providing no redundancy in case of a node failure. Because in our configuration the SSDs are spread across two SVC nodes, two different MDisks will be created.

Figure 7-17 Striped configuration preset

We skip the Storage Pool selection window and show the resulting output in Figure 7-18.

Figure 7-18 Configuration output

From the Pools → MDisks by Pools menu, the two newly created MDisks (mdisk10 and mdisk12) are displayed. Each MDisk is a RAID-0 array, as shown in Figure 7-19 on page 364.

Figure 7-19 RAID level of the created mdisks

7.6 Using Easy Tier with the SVC CLI


This section describes the basic steps for activating Easy Tier by using the SVC command-line interface (CLI). Our example is based on the storage pool configurations shown in Figure 7-1 on page 351 and Figure 7-2 on page 352. Our environment is an SVC cluster with the following resources available:
- 1 x I/O group with two 2145-8G4 nodes
- 1 x external storage subsystem with SSDs
- 1 x external storage subsystem with HDDs
Deleted lines: Many non-Easy Tier-related lines have been deleted from the command output or responses in the examples shown in the following sections, to enable you to focus on Easy Tier-related information only.

7.6.1 Initial cluster status


Example 7-1 displays the SVC cluster characteristics prior to adding multitiered storage (SSD with HDD) and commencing the Easy Tier process. The example shows the two different tiers available in our SVC cluster, generic_ssd and generic_hdd. At this time, no disk is allocated to the generic_ssd tier, so it shows 0.00 MB capacity.
Example 7-1 SVC cluster
IBM_2145:ITSO_SVC3:superuser>svcinfo lscluster
id               name      location partnership bandwidth id_alias
0000020064403A38 ITSO_SVC3 local                          0000020064403A38
IBM_2145:ITSO_SVC3:superuser>svcinfo lscluster 0000020064403A38
id 0000020064403A38
name ITSO_SVC3
...
tier generic_ssd
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier generic_hdd
tier_capacity 767.50GB
tier_free_capacity 726.75GB

7.6.2 Turning on Easy Tier evaluation mode


Figure 7-1 on page 351 shows an existing single tier storage pool. To turn on Easy Tier evaluation mode, we need to set -easytier on for both the storage pool and the volumes in the pool. Refer to Table 7-1 on page 354 to check the required mix of parameters needed to set the volume Easy Tier status to measured. As shown in Example 7-2, we turn Easy Tier on for both the pool and volume so that the extent workload measurement is enabled. We first check and then change the pool. Then we repeat the steps for the volume.
Example 7-2 Turning on Easy Tier evaluation mode
IBM_2145:ITSO_SVC3:superuser>lsmdiskgrp -filtervalue "name=Single*"
id name             status mdisk_count vdisk_count ... easy_tier easy_tier_status
1  Single_Tier_Pool online 3           3           ... off       inactive

IBM_2145:ITSO_SVC3:superuser>lsmdiskgrp Single_Tier_Pool
id 1
name Single_Tier_Pool
status online
mdisk_count 3
vdisk_count 3
...
easy_tier off
easy_tier_status inactive
...
tier generic_ssd
tier_mdisk_count 0
tier_capacity 0.00MB
...
tier generic_hdd
tier_mdisk_count 3
tier_capacity 383.00GB
IBM_2145:ITSO_SVC3:superuser>chmdiskgrp -easytier on Single_Tier_Pool
IBM_2145:ITSO_SVC3:superuser>lsmdiskgrp Single_Tier_Pool
id 1
name Single_Tier_Pool
status online
mdisk_count 3
vdisk_count 3
...
easy_tier on
easy_tier_status active
tier generic_ssd
tier_mdisk_count 0
tier_capacity 0.00MB
...
tier generic_hdd
tier_mdisk_count 3
tier_capacity 383.00GB

------------ Now repeat for the volume ------------
IBM_2145:ITSO_SVC3:superuser>lsvdisk -filtervalue "mdisk_grp_name=Single*"
id name   IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name   capacity type
2  vdisk0 0           io_grp0       online 1            Single_Tier_Pool 10.00GB  striped
IBM_2145:ITSO_SVC3:superuser>lsvdisk vdisk0
id 2
name vdisk0
...
easy_tier off
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 10.00GB
...
IBM_2145:ITSO_SVC3:superuser>chvdisk -easytier on vdisk0
IBM_2145:ITSO_SVC3:superuser>lsvdisk vdisk0

id 2
name vdisk0
...
easy_tier on
easy_tier_status measured
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 10.00GB

7.6.3 Creating a multitier storage pool


With the SSD drive candidates placed into an array, we now need a pool into which the two tiers of disk storage will be placed. If you already have an HDD single tier pool (a traditional pre-SVC V6.1 pool), all you need to know is the existing MDisk group ID or name. In this example we have a storage pool available, Multi_Tier_Pool, into which we want to place our SSD arrays. After the SSD arrays are created, they appear as MDisks and are placed into the storage pool, as shown in Example 7-3. Note that the storage pool easy_tier value is set to auto, because that is the default value assigned when you create a new storage pool. Also note that the default tier value of the SSD MDisks is set to generic_hdd, not generic_ssd.
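If you instead need to create a new pool first, a minimal sketch follows (the pool name and extent size are illustrative assumptions; Example 7-3 uses a pool that already exists):

svctask mkmdiskgrp -name New_Multi_Tier_Pool -ext 256
svctask addmdisk -mdisk ssd_mdisk0 New_Multi_Tier_Pool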
Example 7-3 Multitier pool creation
IBM_2145:ITSO_SVC3:superuser>lsmdiskgrp -filtervalue "name=Multi*"
id name            status mdisk_count vdisk_count easy_tier easy_tier_status
2  Multi_Tier_Pool online 3           1           off       inactive
IBM_2145:ITSO_SVC3:superuser>lsmdiskgrp Multi_Tier_Pool
id 2
name Multi_Tier_Pool
status online
mdisk_count 3
vdisk_count 1
...
easy_tier off
easy_tier_status inactive
tier generic_ssd
tier_mdisk_count 0
...
tier generic_hdd
tier_mdisk_count 3
IBM_2145:ITSO_SVC3:superuser>lsmdisk
id name       status mode    mdisk_grp_id mdisk_grp_name  capacity tier
5  mdisk5     online managed 2            Multi_Tier_Pool 128.0GB  generic_hdd
6  mdisk6     online managed 2            Multi_Tier_Pool 128.0GB  generic_hdd
7  ssd_mdisk0 online managed 2            Multi_Tier_Pool 128.0GB  generic_hdd
IBM_2145:ITSOSVC3:superuser>lsmdisk ssd_mdisk0
id 6
name ssd_mdisk0
status online
mode managed
mdisk_grp_id
mdisk_grp_name Multi_Tier_Pool
capacity 128.0GB

...
tier generic_hdd

7.6.4 Setting the disk tier


As shown in Example 7-3 on page 367, as MDisks are detected, they are assigned the default disk tier of generic_hdd. Easy Tier is also still inactive for the storage pool, because we do not yet have a true multitier pool. To activate the pool, we have to set the SSD MDisks to their correct generic_ssd tier, as shown in Example 7-4, where we modify the SSD disk tier.
Example 7-4 Changing an SSD disk tier to generic_ssd
IBM_2145:ITSOSVC3:superuser>lsmdisk ssd_mdisk0
id 6
name ssd_mdisk0
...
tier generic_hdd
IBM_2145:ITSO_SVC3:superuser>chmdisk -tier generic_ssd ssd_mdisk0
IBM_2145:ITSO_SVC3:superuser>lsmdisk ssd_mdisk0
id 6
name ssd_mdisk0
status online
...
tier generic_ssd
IBM_2145:ITSO_SVC3:superuser>lsmdiskgrp Multi_Tier_Pool
id 1
name Multi_Tier_Pool
status online
mdisk_count 3
vdisk_count 1
...
easy_tier auto
easy_tier_status active
tier generic_ssd
tier_mdisk_count 1
...
tier generic_hdd
tier_mdisk_count 2
...

7.6.5 Checking a volume's Easy Tier mode


To check the Easy Tier operating mode on a volume, display its properties by using the lsvdisk command. A volume in automatic data placement mode has its pool value set to on or auto and its volume value set to on; the CLI volume easy_tier_status is displayed as active, as shown in Example 7-5. An evaluation mode volume has both the pool and volume values set to on; however, the CLI volume easy_tier_status is shown as measured, as seen in Example 7-2 on page 365.
Example 7-5 Checking a volume easy_tier_status
IBM_2145:ITSO_SVC3:superuser>lsvdisk vdisk0
id 1

name vdisk0
...
mdisk_grp_name Multi_Tier_Pool
capacity 10.00GB
type striped
...
easy_tier on
easy_tier_status active
.
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 10.00GB

The volume in the example will be measured by Easy Tier, and hot extents will be migrated from the HDD tier MDisks to the SSD tier MDisks. Note that the generic_hdd tier still holds the entire capacity of the volume, because the generic_ssd capacity value is 0.00 MB. The capacity allocated on the generic_hdd tier will gradually change as Easy Tier optimizes performance by moving extents into the generic_ssd tier.

7.6.6 Final cluster status


Example 7-6 shows the SVC cluster characteristics after adding multitiered storage (SSD with HDD).
Example 7-6 SVC multitier cluster
IBM_2145:ITSO_SVC3:superuser>svcinfo lscluster 0000020060A06FB8
id 0000020060A06FB8
name ITSO_SVC3
...
tier generic_ssd
tier_capacity 128.00GB
tier_free_capacity 124.50GB
.
tier generic_hdd
tier_capacity 638.50GB
tier_free_capacity 622.00G

As you can now see, we have two different tiers available in our SVC cluster, generic_ssd and generic_hdd, and extents are being used on both tiers; see the tier_free_capacity values. However, this command does not tell us whether the SSD storage is being used by the Easy Tier process. To determine whether Easy Tier is actively measuring or migrating extents within the cluster, view the volume status, as shown previously in Example 7-5 on page 368.

7.7 Using Easy Tier with the SVC GUI


This section describes the basic steps to activate Easy Tier by using the web interface or GUI. Our example is based on the storage pool configurations shown in Figure 7-1 on page 351 and Figure 7-2 on page 352.

Our environment is an SVC cluster with the following resources available:
- 1 x I/O group with two 2145-8G4 nodes
- 1 x external storage subsystem with SSDs
- 1 x external storage subsystem with HDDs

7.7.1 Setting the disk tier on MDisks


When displaying the storage pool, you can see that Easy Tier is inactive for the Multi_Tier_Pool (Figure 7-20), even though there are SSD MDisks in the pool, as shown in Figure 7-21.

Figure 7-20 Multi_Tier_Pool details

Figure 7-21 Mdisks by pool

This is because, by default, all MDisks are initially discovered as hard disk drives (HDDs); see the MDisk properties panel in Figure 7-22 on page 371.

Figure 7-22 MDisk default tier is Hard Disk Drive

Therefore, for Easy Tier to take effect, you need to change the disk tier. Right-click the selected MDisk and choose Select Tier, as shown in Figure 7-23.

Figure 7-23 Select the Tier

Now set the MDisk Tier to Solid-State Drive, as shown in Figure 7-24 on page 371.

Figure 7-24 GUI Setting Solid-State Drive tier

Click Close after verifying that the task completed successfully. The MDisk now has the correct tier, so the properties value is correct for a multitier pool, as shown in Figure 7-25.

Figure 7-25 MDisk tier is Solid State Drive

7.7.2 Checking Easy Tier status


Now that the SSDs are known to the pool as Solid-State Drives, the Easy Tier function becomes active automatically, as shown in Figure 7-26 on page 372. After the pool has an Easy Tier status of active, the automatic data relocation process begins for the volumes in the pool, because the default Easy Tier setting for volumes is on. Also note the highlighted icon for storage pools with Easy Tier active.

Figure 7-26 Storage Pool with Easy Tier active

Chapter 8. Advanced Copy Services


Before proceeding with this chapter, review the Advanced Copy Services overview in 2.7, Advanced Copy Services overview on page 35, where we first describe these functions at a high level. In this chapter we discuss in detail the Advanced Copy Services functions available in the IBM System Storage SAN Volume Controller (SVC). In Chapter 9, SAN Volume Controller operations using the command-line interface on page 467, we explain how to use the command-line interface with Advanced Copy Services. In Chapter 10, SAN Volume Controller operations using the GUI on page 631, we explain how to use the GUI with Advanced Copy Services.

8.1 FlashCopy
The FlashCopy function of the IBM System Storage SAN Volume Controller (SVC) provides the capability to perform a point-in-time copy of one or more volumes. In this section we describe the inner workings of FlashCopy and provide details of its configuration and use.

You can use FlashCopy to help you solve critical and challenging business needs that require duplication of the data on your source volume. Volumes can remain online and active while you create consistent copies of the data sets. Because the copy is performed at the block level, it operates below the host operating system and cache and is therefore transparent to the host.

Note: Because FlashCopy operates at the block level, below the host operating system and cache, those levels do need to be flushed to produce consistent FlashCopies.

While the FlashCopy operation is performed, the source volume is frozen briefly to initialize the FlashCopy bitmap, and then I/O is allowed to resume. Although several FlashCopy options require the data to be copied from the source to the target in the background, which can take time to complete, the resulting data on the target volume is presented so that the copy appears to have completed immediately. This is accomplished through the use of a bitmap (or bit array), which keeps track of changes to the data after the FlashCopy is initiated, and an indirection layer, which allows data to be read from the source volume transparently.
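As a brief, hedged sketch of the basic CLI flow (the volume and mapping names are assumptions made for this illustration; the FlashCopy commands themselves are covered in detail in Chapter 9):

svctask mkfcmap -source VDISK_SRC -target VDISK_TGT -name FCMAP_1 -copyrate 50
svctask startfcmap -prep FCMAP_1
svcinfo lsfcmapprogress FCMAP_1

The -prep parameter flushes the SVC cache for the source volume before the mapping is triggered; host and application caches must still be flushed separately, as described in 8.1.6.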

8.1.1 Business Requirements for FlashCopy


When deciding whether FlashCopy will address your challenges, adopt a combined business and technical view of the problems you need to solve. First, determine the needs from a business perspective. Then, determine whether FlashCopy can fulfill the technical needs of those business requirements. The business applications for FlashCopy are wide-ranging. Common use cases for FlashCopy include, but are not limited to:
- Rapidly creating consistent backups of dynamically changing data
- Rapidly creating consistent copies of production data to facilitate data movement or migration between hosts
- Rapidly creating copies of production data sets for application development and testing
- Rapidly creating copies of production data sets for auditing purposes and data mining
- Rapidly creating copies of production data sets for quality assurance
Regardless of your business needs, FlashCopy within the SVC is flexible and has a broad feature set, which makes it applicable to many scenarios.

8.1.2 Backup Improvements with FlashCopy


FlashCopy does not reduce the time that it takes to perform a backup to traditional backup infrastructure. However, it can be used to minimize and, under certain conditions, eliminate the application downtime associated with performing backups, or to move the resource consumption of intensive backups away from production systems. After the FlashCopy is performed, the resulting image of the data can be backed up to tape as though it were the source system. After the copy to tape has completed, the image data is redundant, and the target volumes can be discarded. For time-limited applications like this, no-copy or incremental FlashCopy is most often used. Using these methods puts less load on your infrastructure.
Usually when FlashCopy is used for backup purposes, the target data is managed as read-only at the operating system level. This provides extra security by ensuring that your target data has not been modified and remains true to the source.

8.1.3 Restore with FlashCopy


FlashCopy has the ability to perform a restore from any existing FlashCopy mapping. This means that you can restore (or copy) from the target to the source of your regular FlashCopy relationships; it might be easier to think of this as reversing the direction of the FlashCopy mappings. This approach has several benefits:
- There is no need to worry about pairing mistakes; you simply trigger a restore.
- The restore appears instantaneous.
- You can maintain a pristine image of your data while restoring what was the primary data.
This can be used for a variety of applications, such as recovering your production database application after an errant batch process caused extensive damage.

Note: Although restoring from a FlashCopy is several orders of magnitude quicker than a traditional tape media restore, you should not use it as a substitute for good archiving practices. Instead, keep one to several iterations of your FlashCopies so that you can near-instantly recover your data from the most recent history, and keep your long-term archive as appropriate for your business.

In addition to the restore option, which copies the entire target volume to the source volume, the target can be used to perform a restore of individual files. Simply make the target available on a host (preferably not the source host, because seeing duplicate disks causes problems for most host operating systems) and copy the files to the source by using the normal host data copy methods for your environment.

8.1.4 Moving and migrating data with FlashCopy


FlashCopy can be used to facilitate the movement or migration of data between hosts while minimizing downtime for applications. FlashCopy allows application data to be copied from source volumes to new target volumes while the applications remain online. After the volumes are fully copied and synchronized, the application can be brought down and then immediately brought back up on the new server, accessing the new FlashCopy target volumes. This method differs from the other migration methods discussed later in this chapter; it is typically faster and more efficient from a labor perspective. Common uses for this capability are host and back-end storage hardware refreshes.
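As a hedged sketch of this migration use case (the names are illustrative assumptions), a full background copy is forced with a copy rate of 100, and -autodelete removes the mapping when the background copy completes:

svctask mkfcmap -source VDISK_OLD -target VDISK_NEW -name HOST_MIG -copyrate 100 -autodelete
svctask startfcmap -prep HOST_MIG
svcinfo lsfcmapprogress HOST_MIG

When the progress reaches 100, the application can be cut over to the target volumes.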

8.1.5 Application testing with FlashCopy


It is often important to test a new version of an application or operating system by using actual production data. This ensures the highest quality possible for your environment. FlashCopy makes this type of testing easy to accomplish without putting the production data at risk or requiring downtime to create a constant copy. You simply create a FlashCopy of your source and use that copy for your testing. The copy is a duplicate of your production data down to the block level, so even physical disk identifiers are copied; as a result, it is impossible for your applications to tell the difference.

8.1.6 Host and Application considerations to ensure FlashCopy integrity


Because FlashCopy operates at the block level, it is necessary to understand the interaction between your applications and the host operating system. From a logical standpoint, it is easiest to think of these as layers that sit on top of one another, with the application as the topmost layer and the operating system layer below it. Both of these layers have various levels and methods of caching data to provide better speed. Because the SVC, and thus FlashCopy, sits below these layers, it is not aware of the cache at the application or operating system layers.

To ensure the integrity of the copy that is made, it is necessary to flush the host operating system and application cache of any outstanding reads or writes before performing the FlashCopy operation. Failing to do so produces what is referred to as a crash consistent copy: the resulting copy requires the same type of recovery procedure, such as log replay and file system checks, that is required following a host crash. FlashCopies that are crash consistent can usually be used after file system and application recovery procedures.

Various operating systems and applications provide facilities to stop I/O operations and ensure that all data is flushed from the host cache. If these facilities are available, they can be used to prepare before starting a FlashCopy operation. When such a facility is not available, the host cache must be flushed manually by quiescing the application and unmounting the file system or drives.

Note: From a practical standpoint, when you have an application that is backed by a database and you want to make a FlashCopy of that application's data, it is in most cases sufficient to use the write-suspend method that is available in most modern databases, because the database maintains strict control over I/O. Flushing data from both the application and the backing database is always the recommended method, because it is safer, but the write-suspend method can be used when such facilities do not exist or your environment is time sensitive.
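As a hedged illustration of the write-suspend approach for a DB2 database (the mapping name is an assumption for this sketch; consult your database documentation for its own quiesce facility):

db2 set write suspend for database
svctask startfcmap -prep FCMAP_DB
db2 set write resume for database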

8.1.7 FlashCopy attributes


The FlashCopy function in SVC possesses the following attributes:
- The target is the time-zero copy of the source (known as FlashCopy mapping targets).
- FlashCopy produces an exact copy of the source volume, including any metadata that was written by the host operating system, logical volume manager, and applications.
- The source volume and target volume are available (almost) immediately following the FlashCopy operation.
- The source and target volumes must be the same virtual size.
- The source and target volumes must be on the same SVC cluster.
- The source and target volumes do not need to be in the same I/O group or storage pool.
- The storage pool extent sizes can differ between the source and target.
- The source volumes can have up to 256 target volumes (Multiple Target FlashCopy).
- The target volumes can be the source volumes for other FlashCopy relationships (Cascaded FlashCopy).
- Consistency Groups are supported to enable FlashCopy across multiple volumes. Up to 127 Consistency Groups are supported for FlashCopy.
- The target volume can be updated independently of the source volume.


- Bitmaps governing I/O redirection (the I/O indirection layer) are maintained in both nodes of the SVC I/O Group to prevent a single point of failure.
- FlashCopy mappings and Consistency Groups can be automatically withdrawn after the completion of the background copy.
- Thin-provisioned FlashCopy consumes disk space only when updates are made to the source or target data, not for the entire capacity of a volume copy.
- FlashCopy licensing is based on the virtual capacity of the source volumes.
- Incremental FlashCopy copies all of the data for the first FlashCopy and then only the changes for all subsequent FlashCopies. Incremental FlashCopy can substantially reduce the time required to re-create an independent image.
- Reverse FlashCopy enables FlashCopy targets to become restore points for the source without breaking the FlashCopy relationship and without having to wait for the original copy operation to complete.
- The maximum number of supported FlashCopy mappings is 8192 per SVC cluster.
- The size of the source and target volumes cannot be altered (increased or decreased) while a FlashCopy mapping is defined.

8.2 Reverse FlashCopy


Reverse FlashCopy enables FlashCopy targets to become restore points for the source without breaking the FlashCopy relationship and without having to wait for the original copy operation to complete. It supports multiple targets (up to 256) and thus multiple rollback points.

A key advantage of the SVC Multiple Target Reverse FlashCopy function is that the reverse FlashCopy does not destroy the original target, thus allowing processes using the target, such as a tape backup, to continue uninterrupted. SVC also provides the ability to create an optional copy of the source volume before starting the reverse copy operation. This ability to restore back to the original source data can be useful for diagnostic purposes.

The steps required to restore from an on-disk backup are as follows:
1. (Optional) Create a new target volume (volume Z) and use FlashCopy to copy the production volume (volume X) onto the new target for later problem analysis.
2. Create a new FlashCopy map with the backup to be restored (volume Y or volume W) as the source volume and volume X as the target volume, if this map does not already exist.
3. Start the FlashCopy map (volume Y → volume X) with the -restore option to copy the backup data onto the production disk. If the -restore option is specified and no FlashCopy mapping exists, the command is ignored, preserving your data integrity.
4. The production disk is instantly available with the backup data.

Figure 8-1 on page 378 shows an example of Reverse FlashCopy.
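A hedged CLI sketch of steps 2 and 3 (the volume and mapping names are hypothetical; the -restore flag is the option described above):

svctask mkfcmap -source volume_Y -target volume_X -name restore_map
svctask prestartfcmap restore_map
svctask startfcmap -restore restore_map

Volume X is then immediately usable with the backup contents while the copy completes in the background.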


Figure 8-1 Reverse FlashCopy

Note that regardless of whether the initial FlashCopy map (volume X → volume Y) is incremental, the Reverse FlashCopy operation copies only the modified data. Consistency Groups are reversed by creating a set of new reverse FlashCopy maps and adding them to a new reverse Consistency Group. Consistency Groups cannot contain more than one FlashCopy map with the same target volume.

8.2.1 FlashCopy and Tivoli Storage FlashCopy Manager


The management of many large FlashCopy relationships and Consistency Groups is a complex task without some form of automation. IBM Tivoli Storage FlashCopy Manager provides fast application-aware backups and restores by leveraging the advanced point-in-time image technologies in the IBM SAN Volume Controller. In addition, it provides optional integration with IBM Tivoli Storage Manager for long-term storage of snapshots. Figure 8-2 on page 379 shows the integration of Tivoli Storage Manager and FlashCopy Manager at a conceptual level.


Figure 8-2 Tivoli Storage Manager for Advanced Copy Services features

Tivoli FlashCopy Manager provides many of the features of Tivoli Storage Manager for Advanced Copy Services without requiring Tivoli Storage Manager. With Tivoli FlashCopy Manager, you can coordinate and automate host preparation steps before issuing FlashCopy start commands to ensure that a consistent backup of the application is made. You can put databases into hot backup mode and flush the file system cache before starting the FlashCopy. FlashCopy Manager also allows for easier management of on-disk backups using FlashCopy, and provides a simple interface to perform the reverse operation. Figure 8-3 on page 380 shows the FlashCopy Manager features.


Figure 8-3 Tivoli Storage Manager FlashCopy Manager features

IBM Tivoli FlashCopy Manager V3.1, released October 21, 2011, added support for VMware vSphere as its major new feature, leveraging IBM FlashCopy to provide extremely quick and efficient backups of VMware environments. This release also integrates with IBM Tivoli Storage Manager for Virtual Environments, which allows backup of point-in-time images into the Tivoli Storage Manager infrastructure for long-term storage. With the addition of VMware vSphere, FlashCopy Manager provides support and application awareness for the following environments:
1. MMC Snap-in and Base System Services for Microsoft Windows
2. Microsoft Exchange 2007 and 2010
3. VSS Requestor for Microsoft Windows
4. IBM DB2 (with or without SAP) for AIX, Solaris SPARC, HP-UX (IA-64), and Linux x86_64
5. Oracle for AIX, Solaris SPARC, HP-UX (IA-64), and Linux x86_64
6. Oracle with SAP for AIX, Solaris SPARC, HP-UX (IA-64), and Linux x86_64
7. Generic Backup Agent support for custom applications on AIX, Solaris SPARC, HP-UX (IA-64), and Linux x86_64
8. VMware vSphere on Linux x86_64
Describing IBM Tivoli FlashCopy Manager in detail is beyond the scope of this document. To learn more, visit the following link:
http://www-01.ibm.com/software/tivoli/products/storage-flashcopy-mgr/


8.3 FlashCopy functional overview


FlashCopy works by defining a FlashCopy mapping that consists of one source volume together with one target volume. Multiple FlashCopy mappings (source-to-target relationships) can be defined, and point-in-time consistency can be maintained across multiple individual mappings using Consistency Groups. See "Consistency Group with Multiple Target FlashCopy" on page 385 for more information about this topic.

Before you start a FlashCopy (regardless of the type and options specified), you must issue a prestartfcmap or prestartfcconsistgrp command, which puts the SVC cache into write-through mode and flushes the I/O currently bound for your volume. After FlashCopy is started, an effective copy of a source volume to a target volume has been created. The content of the source volume is immediately presented on the target volume, and the original content of the target volume is lost. This FlashCopy operation is also referred to as a time-zero copy (T0). Immediately following the FlashCopy operation, both the source and target volumes are available for use.

The FlashCopy operation creates a bitmap that is referenced and maintained to direct I/O requests within the source and target relationship. This bitmap is updated to reflect the active block locations as data is copied in the background from the source to the target and as updates are made to the source. For more details about background copy, see 8.4.5, "Grains and the FlashCopy bitmap" on page 386.

Figure 8-4 on page 381 illustrates the redirection of the host I/O toward the source volume and the target volume.

Figure 8-4 Redirection of host I/O

8.4 Implementing SVC FlashCopy


In the following section we describe how FlashCopy is implemented in the SVC.


8.4.1 FlashCopy mappings


FlashCopy occurs between a source volume and a target volume. The source and target volumes must be the same size. The minimum granularity that SVC supports for FlashCopy is an entire volume; it is not possible to use FlashCopy to copy only part of a volume.

Note: As with any point-in-time copy technology, you are bound by operating system and application requirements for interdependent data, as well as by the restriction to an entire volume.

The source and target volumes must belong to the same SVC cluster, but they do not have to be in the same I/O Group or storage pool. FlashCopy associates a source volume with a target volume through a FlashCopy mapping. To become members of a FlashCopy mapping, the source and target volumes must be the same size. Volumes that are members of a FlashCopy mapping cannot have their size increased or decreased while they are members of the mapping.

A FlashCopy mapping is the act of creating a relationship between a source volume and a target volume. FlashCopy mappings can be either stand-alone or a member of a Consistency Group. You can perform the actions of preparing, starting, or stopping FlashCopy on either a stand-alone mapping or a Consistency Group. Figure 8-5 illustrates the concept of FlashCopy mapping.

Figure 8-5 FlashCopy mapping

8.4.2 Multiple Target FlashCopy


SVC supports up to 256 target volumes from a single source volume. Each copy is managed by a unique mapping. In general, each mapping acts independently and is not affected by other mappings sharing the same source volume. Figure 8-6 illustrates the Multiple Target FlashCopy implementation.

Note: For independence of FlashCopy mappings, prior copies must be complete.


Figure 8-6 Multiple Target FlashCopy implementation

Figure 8-6 shows four targets and mappings taken from a single source, along with their interdependencies. In this example, Target 1 is the oldest (as measured from the time it was started) and Target 4 is the newest. The ordering is important because of the way in which data is copied when multiple target volumes are defined and because of the dependency chain that results.

A write to the source volume does not cause its data to be copied to all of the targets. Instead, it is copied to the newest target volume only (Target 4 in Figure 8-6). The older targets refer to newer targets first before referring to the source.

From the point of view of an intermediate target disk (neither the oldest nor the newest), it treats the set of newer target volumes and the true source volume as a type of composite source. It treats all older volumes as a kind of target (and behaves like a source to them). If the mapping for an intermediate target volume shows 100% progress, its target volume contains a complete set of data. In this case, mappings treat the set of newer target volumes, up to and including the 100% progress target, as a form of composite source. A dependency relationship exists between a particular target and all newer targets (up to and including a target that shows 100% progress) that share the same source, until all data has been copied to this target and all older targets.

You can read more about Multiple Target FlashCopy in 8.4.6, "Interaction and dependency between Multiple Target FlashCopy mappings" on page 387.
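As a hypothetical illustration of how such a chain arises (all names here are placeholders), four mappings can be created from one source and then started in order:

svctask mkfcmap -source prod_vol -target target_1 -name map_1
svctask mkfcmap -source prod_vol -target target_2 -name map_2
svctask mkfcmap -source prod_vol -target target_3 -name map_3
svctask mkfcmap -source prod_vol -target target_4 -name map_4

If map_1 is prepared and started first and map_4 last, map_4 is the newest mapping; a host write to prod_vol then causes the affected grain to be copied to target_4 only, as described above.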

8.4.3 Consistency Groups


Consistency Groups address the requirement to preserve point-in-time data consistency across multiple volumes for applications with related data that spans multiple volumes. For these volumes, Consistency Groups maintain the integrity of the FlashCopy by ensuring that dependent writes are executed in the application's intended sequence. When Consistency Groups are used, the FlashCopy commands are issued to the FlashCopy Consistency Group, which performs the operation on all FlashCopy mappings contained within the Consistency Group at the same time. Figure 8-7 illustrates a Consistency Group consisting of two FlashCopy mappings.


Figure 8-7 FlashCopy Consistency Group

Note: After an individual FlashCopy mapping has been added to a Consistency Group, it can only be managed as part of the group. Operations such as prepare, start, and stop are no longer allowed on the individual mapping.

Dependent writes
To illustrate why it is crucial to use Consistency Groups when a data set spans multiple volumes, consider the following typical sequence of writes for a database update transaction:
1. A write is executed to update the database log, indicating that a database update is about to be performed.
2. A second write is executed to perform the actual update to the database.
3. A third write is executed to update the database log, indicating that the database update has completed successfully.

The database ensures the correct ordering of these writes by waiting for each step to complete before starting the next. However, if the database log (updates 1 and 3) and the database itself (update 2) are on separate volumes, it is possible for the FlashCopy of the database volume to occur before the FlashCopy of the database log. This can result in the target volumes seeing writes (1) and (3) but not (2), because the FlashCopy of the database volume occurred before the write completed.

In this case, if the database was restarted using the backup made from the FlashCopy target volumes, the database log would indicate that the transaction had completed successfully when in fact it had not, because the FlashCopy of the volume with the database file was started (the bitmap was created) before the write had completed to the volume. Therefore, the transaction is lost and the integrity of the database is in question.

To overcome the issue of dependent writes across volumes and to create a consistent image of the client data, it is necessary to perform a FlashCopy operation on multiple volumes as an atomic operation. To accomplish this, the SVC supports the concept of Consistency Groups. A FlashCopy Consistency Group can contain up to 512 FlashCopy mappings (the maximum number of FlashCopy mappings supported by the SVC cluster).

FlashCopy commands can then be issued to the FlashCopy Consistency Group and thereby simultaneously to all of the FlashCopy mappings that are defined in the group. For example, when issuing a FlashCopy start command to the Consistency Group, all of the FlashCopy mappings in the Consistency Group are started at the same time, resulting in a point-in-time copy that is consistent across all of the FlashCopy mappings contained in the group.
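A brief sketch of this CLI flow for the database example above (the group, volume, and mapping names are hypothetical):

svctask mkfcconsistgrp -name db_grp
svctask mkfcmap -source db_log -target db_log_copy -consistgrp db_grp
svctask mkfcmap -source db_data -target db_data_copy -consistgrp db_grp
svctask prestartfcconsistgrp db_grp
svctask startfcconsistgrp db_grp

Both mappings are started atomically, so the log and data copies represent the same point in time.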

Consistency Group with Multiple Target FlashCopy


It is important to note that a Consistency Group aggregates FlashCopy mappings, not volumes. Thus, where a source volume has multiple FlashCopy mappings, the mappings can be in the same or separate Consistency Groups. If a particular volume is the source volume for multiple FlashCopy mappings, you might want to create separate Consistency Groups to separate each mapping of the same source volume. Regardless of whether the source volume with multiple target volumes is in the same Consistency Group or in separate Consistency Groups, the resulting FlashCopy produces multiple identical copies of the source data.

Maximum configurations
Table 8-1 lists the FlashCopy properties and maximum configurations.
Table 8-1 FlashCopy properties and maximum configurations

- FlashCopy targets per source: 256. This is the maximum number of FlashCopy mappings that can exist with the same source volume.
- FlashCopy mappings per cluster: 4,096. The number of mappings is no longer limited by the number of volumes in the cluster, so the FlashCopy component limit applies.
- FlashCopy Consistency Groups per cluster: 127. This is an arbitrary limit that is policed by the software.
- FlashCopy volume space per I/O Group: 1,024 TB. This is a limit on the quantity of FlashCopy mappings using bitmap space from this I/O Group. This maximum configuration consumes all 512 MB of bitmap space for the I/O Group and allows no Metro Mirror or Global Mirror bitmap space. The default is 40 TB.
- FlashCopy mappings per Consistency Group: 512. This limit is due to the time taken to prepare a Consistency Group with a large number of mappings.

8.4.4 FlashCopy indirection layer


The FlashCopy indirection layer governs the I/O to both the source and target volumes when a FlashCopy mapping is started, which is done using a FlashCopy bitmap. The purpose of the FlashCopy indirection layer is to enable both the source and target volumes for read and write I/O immediately after the FlashCopy has been started. To illustrate how the FlashCopy indirection layer works, we examine what happens when a FlashCopy mapping is prepared and subsequently started.


When a FlashCopy mapping is prepared and started, the following sequence is applied:
1. Flush the write cache to the source volume or volumes that are part of a Consistency Group.
2. Put the cache into write-through mode on the source volumes.
3. Discard the cache for the target volumes.
4. Establish a sync point on all of the source volumes in the Consistency Group (creating the FlashCopy bitmap).
5. Ensure that the indirection layer governs all of the I/O to the source and target volumes.
6. Enable the cache on both the source and target volumes.

FlashCopy provides the semantics of a point-in-time copy using the indirection layer, which intercepts I/O directed at either the source or target volumes. The act of starting a FlashCopy mapping causes this indirection layer to become active in the I/O path, which occurs automatically across all FlashCopy mappings in the Consistency Group. The indirection layer then determines how each I/O is to be routed based on the following factors:
- The volume and the logical block address (LBA) to which the I/O is addressed
- Its direction (read or write)
- The state of an internal data structure, the FlashCopy bitmap

The indirection layer allows the I/O to go through to the underlying volume, redirects the I/O from the target volume to the source volume, or queues the I/O while it arranges for data to be copied from the source volume to the target volume. To explain in more detail which action is applied for each I/O, we first look at the FlashCopy bitmap.

8.4.5 Grains and the FlashCopy bitmap


When data is copied between volumes, it is copied in units of address space known as grains. Grains are units of data grouped together to optimize the use of the bitmap that keeps track of changes to the data between the source and target volumes. You have the option of using 64 KB or 256 KB grain sizes; 256 KB is the default. The FlashCopy bitmap contains one bit for each grain and is used to track whether the source grain has been copied to the target. Note that the 64 KB grain size consumes bitmap space at four times the rate of the default 256 KB size. The FlashCopy bitmap dictates read and write behavior for both the source and target volumes.
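As a rough worked example (this arithmetic is ours, assuming binary units throughout): a 1 TiB source volume contains 1 TiB / 256 KiB = 4,194,304 grains at the default grain size, so its FlashCopy bitmap needs 4,194,304 bits, or 512 KiB. At the 64 KiB grain size, the same volume has four times as many grains and therefore needs about 2 MiB of bitmap space, which is why the smaller grain size consumes the I/O Group's bitmap space four times faster.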

Source reads
Reads are performed from the source volume. This is the same as for non-FlashCopy volumes.

Source writes
When a write is issued to the source, the grain is first copied to the target if it has not already been copied, the bitmap is updated, and then the write is performed to the source.

Target reads
Reads are performed from the target if the grain has already been copied. Otherwise, the read is performed from the source and no copy is performed.


Target writes
When a write is issued to the target, the grain is first copied from the source to the target unless the entire grain is being updated on the target. In that case, the target is marked as split from the source (provided there is no I/O error during the write) and the write goes directly to the target.

The FlashCopy indirection layer algorithm


Think of the FlashCopy indirection layer as the I/O traffic director when a FlashCopy mapping is active. The I/O is intercepted and handled according to whether it is directed at the source volume or at the target volume, depending on the nature of the I/O (read or write) and the state of the grain (whether it has been copied). Figure 8-8 illustrates how the background copy runs while I/Os are handled according to the indirection layer algorithm.

Figure 8-8 I/O processing with FlashCopy

8.4.6 Interaction and dependency between Multiple Target FlashCopy mappings


Figure 8-9 on page 388 represents a set of four FlashCopy mappings that share a common source. The FlashCopy mappings will target volumes Target 0, Target 1, Target 2, and Target 3.


Figure 8-9 Interactions between MTFC mappings

- Target 0 is not dependent on the source, because it has completed copying. Target 0 has two dependent mappings (Target 1 and Target 2).
- Target 1 is dependent upon Target 0. It will remain dependent until all of Target 1 has been copied. Target 2 is dependent on it, because Target 2 is 20% copy complete. After all of Target 1 has been copied, it can then move to the Idle_copied state.
- Target 2 is dependent upon Target 0 and Target 1 and will remain dependent until all of Target 2 has been copied. No target is dependent on Target 2, so when all of the data has been copied to Target 2, it can move to the Idle_copied state.
- Target 3 has completed copying, so it is not dependent on any other maps.

Target writes (with Multiple Target FlashCopy)


A write to an intermediate or newest target volume must consider the state of the grain within its own mapping, as well as the state of the grain of the next oldest mapping. If the grain of the next oldest mapping has not yet been copied, it must be copied before the write is allowed to proceed, to preserve the contents of the next oldest mapping. The data written to the next oldest mapping comes from a target or the source. If the grain in the target being written has not yet been copied, the grain is copied from the oldest already-copied grain in the mappings that are newer than the target, or from the source if none is already copied. After this copy has been done, the write can be applied to the target.

Target reads (with Multiple target FlashCopy)


If the grain being read has already been copied from the source to the target, the read simply returns data from the target being read. If the grain has not been copied, each of the newer mappings is examined in turn, and the read is performed from the first copy found. If none is found, the read is performed from the source.


Stopping the copy process


When a stop command is issued to a mapping that contains a target that has dependent mappings, the mapping enters the Stopping state and begins copying all grains that are uniquely held on the target volume of the mapping being stopped to the next oldest mapping that is in the Copying state. The mapping remains in the Stopping state until all grains have been copied, and then enters the Stopped state.

Note about stopping the copy process: The stopping copy process can be ongoing for several mappings sharing the same source at the same time. At the completion of this process, the mapping automatically makes an asynchronous state transition to the Stopped state, or to the Idle_copied state if the mapping was in the Copying state with progress = 100%.

For example, if the mapping associated with Target 0 was issued a stopfcmap or stopfcconsistgrp command, Target 0 enters the Stopping state while a process copies the data of Target 0 to Target 1. After all of the data has been copied, Target 0 enters the Stopped state, and Target 1 is no longer dependent upon Target 0; however, Target 1 remains dependent on Target 2.

8.4.7 Summary of the FlashCopy indirection layer algorithm


Table 8-2 summarizes the indirection layer algorithm.
Table 8-2 Summary table of the FlashCopy indirection layer algorithm

Source volume, grain not yet copied:
- Read: Read from the source volume.
- Write: Copy the grain to the most recently started target for this source, then write to the source.

Source volume, grain already copied:
- Read: Read from the source volume.
- Write: Write to the source volume.

Target volume, grain not yet copied:
- Read: If any newer targets exist for this source in which this grain has already been copied, read from the oldest of these targets. Otherwise, read from the source.
- Write: Hold the write. Check the dependency target volumes to see whether the grain has been copied. If the grain is not already copied to the next oldest target for this source, copy the grain to the next oldest target. Then, write to the target.

Target volume, grain already copied:
- Read: Read from the target volume.
- Write: Write to the target volume.

8.4.8 Interaction with the cache


This copy-on-write process introduces significant latency into write operations. To isolate the active application from this additional latency, the FlashCopy indirection layer is placed logically beneath the cache. Therefore, the additional latency introduced by the copy-on-write process is encountered only by the internal cache destage operations and not by the application. Figure 8-10 on page 390 illustrates the logical placement of the FlashCopy indirection layer.


Figure 8-10 Logical placement of the FlashCopy indirection layer

8.4.9 FlashCopy and image mode volumes


FlashCopy can be used with image mode volumes. Because the source and target volumes must be exactly the same size, when creating a FlashCopy mapping you must create a target volume with the exact same size as the image mode volume. To determine this size, use the svcinfo lsvdisk -bytes volumeName command. The size in bytes is then used to create the volume for the FlashCopy mapping. Working with an exact number of bytes matters because image mode volumes might not line up exactly on other measurement unit boundaries. In Example 8-1, we list the size of the Image_volume_A volume. Subsequently, the volume_A_copy volume is created, specifying the same size.
Example 8-1 Listing the size of a volume in bytes and creating a volume of equal size

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk Image_volume_A
id 8
name Image_volume_A
IO_group_id 0
IO_group_name io_grp0
status online
storage_pool_id 2
storage_pool_name Storage_Pool_Image
capacity 36.0GB
type image
. . .
autoexpand
warning
grainsize
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -size 36 -unit gb -name volume_A_copy -mdiskgrp Storage_Pool_DS47 -vtype striped -iogrp 1
Virtual Disk, id [19], successfully created


Tip: Alternatively, you can use the expandvdisksize and shrinkvdisksize commands to modify the size of the volume. See 9.5.10, "Expanding a volume" on page 500 and 9.5.16, "Shrinking a volume" on page 505 for more information. Remember that these actions must be performed before a mapping is created.

You can use an image mode volume as either a FlashCopy source volume or a target volume.

8.4.10 FlashCopy mapping events


In this section, we describe the events that modify the states of a FlashCopy mapping. The mapping events are described in Table 8-3.

Overview of a FlashCopy sequence of events:
1. Associate the source data set with a target location (one or more source and target volumes).
2. Create a FlashCopy mapping for each source volume to the corresponding target volume. The target volume must be equal in size to the source volume.
3. Discontinue access to the target (application dependent).
4. Prepare (pre-trigger) the FlashCopy:
   a. Flush the cache for the source.
   b. Discard the cache for the target.
5. Start (trigger) the FlashCopy:
   a. Pause I/O (briefly) on the source.
   b. Resume I/O on the source.
   c. Start I/O on the target.
Table 8-3 Mapping events

Create: A new FlashCopy mapping is created between the specified source volume and the specified target volume. The operation fails if any of the following conditions is true:
- For SAN Volume Controller software Version 4.1.0 or earlier, the source or target volume is already a member of a FlashCopy mapping.
- For SAN Volume Controller software Version 4.2.0 or later, the source or target volume is already a target volume of a FlashCopy mapping.
- For SAN Volume Controller software Version 4.2.0 or later, the source volume is already a member of 16 FlashCopy mappings.
- For SAN Volume Controller software Version 4.3.0 or later, the source volume is already a member of 256 FlashCopy mappings.
- The node has insufficient bitmap memory.
- The source and target volume sizes differ.


Prepare: The prestartfcmap or prestartfcconsistgrp command is directed either to a Consistency Group for FlashCopy mappings that are members of a normal Consistency Group, or to the mapping name for FlashCopy mappings that are stand-alone mappings. The command places the FlashCopy mapping into the Preparing state. Important: The prestartfcmap or prestartfcconsistgrp command can corrupt any data that previously resided on the target volume, because cached writes are discarded. Even if the FlashCopy mapping is never started, the data from the target might have logically changed during the act of preparing to start the FlashCopy mapping.

Flush done: The FlashCopy mapping automatically moves from the Preparing state to the Prepared state after all cached data for the source is flushed and all cached data for the target is no longer valid.

Start: When all of the FlashCopy mappings in a Consistency Group are in the Prepared state, the FlashCopy mappings can be started. To preserve the cross-volume Consistency Group, the start of all of the FlashCopy mappings in the Consistency Group must be synchronized correctly with respect to I/Os that are directed at the volumes, by using the startfcmap or startfcconsistgrp command. The following actions occur when the startfcmap or startfcconsistgrp command runs:
- New reads and writes to all source volumes in the Consistency Group are paused in the cache layer until all ongoing reads and writes beneath the cache layer are completed.
- After all FlashCopy mappings in the Consistency Group are paused, the internal cluster state is set to allow FlashCopy operations.
- After the cluster state is set for all FlashCopy mappings in the Consistency Group, read and write operations continue on the source volumes.
- The target volumes are brought online.
As part of the startfcmap or startfcconsistgrp command, read and write caching is enabled for both the source and target volumes.

Modify: You can modify the following FlashCopy mapping properties:
- FlashCopy mapping name
- Clean rate
- Consistency Group
- Copy rate (for background copy)
- Automatic deletion of the mapping when the background copy is complete

Stop: There are two separate mechanisms by which a FlashCopy mapping can be stopped: a user command is issued, or an I/O error occurs.

Delete: This command requests that the specified FlashCopy mapping be deleted. If the FlashCopy mapping is in the Stopped state, the force flag must be used.

Flush failed: If the flush of data from the cache cannot be completed, the FlashCopy mapping enters the Stopped state.


Copy complete: After all of the source data has been copied to the target and there are no dependent mappings, the state is set to Copied. If the option to automatically delete the mapping after the background copy completes is specified, the FlashCopy mapping is deleted automatically. If this option is not specified, the FlashCopy mapping is not deleted automatically and can be reactivated by preparing and starting again.

Bitmap online/offline: The node has failed.

8.4.11 FlashCopy mapping states


In this section, we describe the states of a FlashCopy mapping in more detail.

Idle_or_copied
Read and write caching is enabled for both the source and the target. A FlashCopy mapping exists between the source and target but the source and target behave as independent volumes in this state.

Copying
The FlashCopy indirection layer governs all I/O to the source and target volumes while the background copy is running. The background copy process is copying grains from the source to the target. Reads and writes are executed on the target as though the contents of the source were instantaneously copied to the target during the startfcmap or startfcconsistgrp command. The source and target can be independently updated. Internally, the target depends on the source for certain tracks. Read and write caching is enabled on the source and the target.

Stopped
The FlashCopy was stopped either by a user command or by an I/O error. When a FlashCopy mapping is stopped, the integrity of the data on the target volume is lost. Therefore, while the FlashCopy mapping is in this state, the target volume is in the Offline state. To regain access to the target, the mapping must be started again (the previous point-in-time will be lost) or the FlashCopy mapping must be deleted. The source volume is accessible, and read/write caching is enabled for the source. In the Stopped state, a mapping can either be prepared again or deleted.

Stopping
The mapping is in the process of transferring data to a dependent mapping. The behavior of the target volume depends on whether the background copy process had completed while the mapping was in the Copying state. If the copy process had completed, the target volume remains online while the stopping copy process completes. If the copy process had not completed, data in the cache is discarded for the target volume. The target volume is taken offline, and the stopping copy process runs. After the data has been copied, a stop complete asynchronous event notification is issued. The mapping moves to the Idle/Copied state if the background copy has completed, or to the Stopped state if it has not. The source volume remains accessible for I/O.


Suspended
The FlashCopy was in the Copying or Stopping state when access to the metadata was lost. As a result, both the source and target volumes are offline, and the background copy process is halted. When the metadata becomes available again, the FlashCopy mapping returns to the Copying or Stopping state. Access to the source and target volumes is restored, and the background copy or stopping process resumes. Unflushed data that was written to the source or target before the FlashCopy was suspended is pinned in the cache until the FlashCopy mapping leaves the Suspended state.

Preparing
The FlashCopy is in the process of preparing the mapping. While in this state, data from the cache is destaged to disk, and a consistent copy of the source exists on disk. At this time, the cache is operating in write-through mode, and therefore writes to the source volume experience additional latency. The target volume is reported as online but does not perform reads or writes; these reads and writes are failed by the SCSI front end.

Before starting the FlashCopy mapping, it is important that any cache at the host level, for example, buffers in the host operating system or application, are also instructed to flush any outstanding writes to the source volume. Performing the required cache flush as part of the startfcmap or startfcconsistgrp command would cause I/Os to be delayed waiting for the cache flush to complete. To overcome this problem, SVC FlashCopy supports the prestartfcmap and prestartfcconsistgrp commands, which prepare for a FlashCopy start while still allowing I/Os to continue to the source volume.

In the Preparing state, the FlashCopy mapping is prepared by the following steps:
1. Flushing any modified write data associated with the source volume from the cache. Read data for the source is left in the cache.
2. Placing the cache for the source volume into write-through mode, so that subsequent writes wait until data has been written to disk before completing the write command that is received from the host.
3. Discarding any read or write data that is associated with the target volume from the cache.

Prepared
When in the Prepared state, the FlashCopy mapping is ready to perform a start. While the FlashCopy mapping is in this state, the target volume is in the Offline state. In the Prepared state, writes to the source volume experience additional latency because the cache is operating in write-through mode.

Summary of FlashCopy mapping states


Table 8-4 on page 395 lists the various FlashCopy mapping states and the corresponding states of the source and target volumes.


Table 8-4 FlashCopy mapping state summary

- Idling/Copied: Source online, write-back cache; target online, write-back cache.
- Copying: Source online, write-back cache; target online, write-back cache.
- Stopped: Source online, write-back cache; target offline, cache N/A.
- Stopping: Source online, write-back cache; target online if the copy is complete or offline if the copy is not complete, cache N/A.
- Suspended: Source offline, write-back cache; target offline, cache N/A.
- Preparing: Source online, write-through cache; target online but not accessible, cache N/A.
- Prepared: Source online, write-through cache; target online but not accessible, cache N/A.

8.4.12 Thin-provisioned FlashCopy


FlashCopy source and target volumes can be thin-provisioned.

Either source or target thin-provisioned


The most common configuration is a fully allocated source and a thin-provisioned target. This configuration allows the target to consume a smaller amount of real storage than the source. With this configuration, use only the NOCOPY (background copy rate = 0%) option. Although the COPY option is supported, it creates a fully allocated target, which defeats the purpose of thin provisioning.
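A minimal CLI sketch of this configuration, assuming a pool named Pool_1 and hypothetical volume names (-rsize 0% and -autoexpand create the thin-provisioned target; -copyrate 0 selects NOCOPY):

svctask mkvdisk -mdiskgrp Pool_1 -iogrp 0 -size 100 -unit gb -rsize 0% -autoexpand -name thin_target
svctask mkfcmap -source prod_vol -target thin_target -copyrate 0 -name snap_map

Real capacity is then consumed on thin_target only as grains are copied due to changes, not for the full 100 GB virtual size.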

Source and target both thin-provisioned


When both the source and target volumes are thin-provisioned, only the data allocated to the source is copied to the target. In this configuration, the background copy option has no effect.

Note: Best performance is obtained when the grain size of the thin-provisioned volume is the same as the grain size of the FlashCopy mapping.

Thin-provisioned incremental FlashCopy


The implementation of thin-provisioned volumes does not preclude the use of incremental FlashCopy on the same volumes. It does not make sense to have a fully allocated source volume and then use incremental FlashCopy (which is always a full copy the first time) to copy this fully allocated source volume to a thin-provisioned target volume; however, it is not prohibited.

Optional configuration: A thin-provisioned source volume can be incrementally copied using FlashCopy to a thin-provisioned target volume. Whenever the FlashCopy is performed, only data that has been modified is recopied to the target. Note that if space is allocated on the target because of I/O to the target volume, this space is not reclaimed by subsequent FlashCopy operations.


A fully allocated source volume can be incrementally copied using FlashCopy to another fully allocated volume at the same time as it is copied to multiple thin-provisioned targets (taken at separate points in time). This combination allows a single full backup to be kept for recovery purposes, separates the backup workload from the production workload, and at the same time allows older thin-provisioned backups to be retained.

8.4.13 Background copy


With FlashCopy background copy enabled, the source volume data is copied to the corresponding target volume. With background copy disabled, only data that has changed on the source volume is copied to the target volume. The benefit of using a FlashCopy mapping with background copy enabled is that the target volume becomes a real clone (independent from the source volume) after the copy completes. When the background copy function is not performed, the target volume remains a valid copy of the source data only while the FlashCopy mapping remains in place.

The background copy rate is a property of a FlashCopy mapping, defined as a value between 0 and 100. It can be defined and dynamically changed for individual FlashCopy mappings. A value of 0 disables background copy. The relationship of the background copy rate value to the attempted number of grains copied per second is shown in Table 8-5.
Table 8-5 Background copy rate

Value / Data copied per second / Grains per second (256 KB grain) / Grains per second (64 KB grain):
- 1-10:   128 KB / 0.5 / 2
- 11-20:  256 KB / 1   / 4
- 21-30:  512 KB / 2   / 8
- 31-40:  1 MB   / 4   / 16
- 41-50:  2 MB   / 8   / 32
- 51-60:  4 MB   / 16  / 64
- 61-70:  8 MB   / 32  / 128
- 71-80:  16 MB  / 64  / 256
- 81-90:  32 MB  / 128 / 512
- 91-100: 64 MB  / 256 / 1024

The grains-per-second numbers represent the maximum number of grains that the SVC copies per second, assuming that the bandwidth to the managed disks (MDisks) can accommodate this rate. If the SVC is unable to achieve these copy rates because of insufficient bandwidth from the SVC nodes to the MDisks, the background copy I/O contends for resources on an equal basis with the I/O arriving from the hosts. Both background copy I/O and host I/O tend to see an increase in latency and a consequent reduction in throughput. Both background copy and foreground I/O continue to make forward progress, and do not stop, hang, or cause the node to fail. The background copy is performed by both nodes of the I/O Group in which the source volume resides.
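Because the copy rate can be changed dynamically, a mapping can, for example, be started with background copy disabled and raised later; a hedged sketch with a hypothetical mapping name:

svctask chfcmap -copyrate 0 nightly_map      (disable background copy)
svctask chfcmap -copyrate 50 nightly_map     (raise to roughly 2 MBps, per Table 8-5)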

8.4.14 Synthesis
The FlashCopy functionality in SVC simply creates copy volumes. All of the data in the source volume is copied to the destination volume, including the operating system, logical volume manager, and application metadata.

Note: Certain operating systems are unable to use FlashCopy without an additional step, which is termed synthesis. In summary, synthesis performs a type of transformation on the operating system metadata on the target volume so that the operating system can use the disk.

8.4.15 Serialization of I/O by FlashCopy


In general, the FlashCopy function in the SVC introduces no explicit serialization into the I/O path. Therefore, many concurrent I/Os are allowed to the source and target volumes. However, there is a lock for each grain, which can be held in shared or exclusive mode. For multiple targets, a common lock is shared by the mappings that are derived from a particular source volume. The lock is used in the following modes under the following conditions:
- The lock is held in shared mode for the duration of a read from the target volume that touches a grain that has not been copied from the source.
- The lock is held in exclusive mode while a grain is being copied from the source to the target.

If the lock is held in shared mode and another process wants to use the lock in shared mode, the request is granted unless a process is already waiting to use the lock in exclusive mode. If the lock is held in shared mode and it is requested in exclusive mode, the requesting process must wait until all holders of the shared lock free it. Similarly, if the lock is held in exclusive mode, a process wanting to use the lock in either shared or exclusive mode must wait for it to be freed.

8.4.16 Event handling


When a FlashCopy mapping is not copying or stopping, the FlashCopy function does not affect the handling or reporting of events for error conditions encountered in the I/O path. Event handling and reporting are affected by FlashCopy only when a FlashCopy mapping is copying or stopping, that is, actively moving data. We describe these scenarios in the following sections.

Node failure
Normally, two copies of the FlashCopy bitmaps are maintained: one on each of the two nodes making up the I/O Group of the source volume. When a node fails, one copy of the bitmaps for all FlashCopy mappings whose source volume is a member of the failing node's I/O Group becomes inaccessible. FlashCopy continues with a single copy of the FlashCopy bitmap, stored as non-volatile in the remaining node of the source I/O Group. The cluster metadata is updated to indicate that the missing node no longer holds a current bitmap. When the failing node recovers, or a replacement node is added to the I/O Group, the bitmap redundancy is restored.


Path failure (Path Offline state)


In a fully functioning cluster, all of the nodes have a software representation of every volume in the cluster within their application hierarchy. Because the storage area network (SAN) that links the SVC nodes to each other and to the MDisks is made up of many independent links, it is possible for a subset of the nodes to be temporarily isolated from several of the MDisks. When this happens, the managed disks are said to be Path Offline on certain nodes.

Other nodes: Other nodes might see the managed disks as Online, because their connection to the managed disks is still functioning.

When an MDisk enters the Path Offline state on an SVC node, all of the volumes that have extents on that MDisk also become Path Offline, again only on the affected nodes. When a volume is Path Offline on a particular SVC node, host access to that volume through that node fails with a SCSI check condition indicating Offline.

Path Offline for the source volume


If a FlashCopy mapping is in the Copying state and the source volume goes Path Offline, this Path Offline state is propagated to all target volumes up to, but not including, the target volume for the newest mapping that is 100% copied but remains in the Copying state. If no mappings are 100% copied, all of the target volumes are taken offline. Again, note that Path Offline is a state that exists on a per-node basis; other nodes might not be affected. If the source volume comes back Online, the target and source volumes are brought back Online.

Path Offline for the target volume


If a target volume goes Path Offline but the source volume is still Online, and if there are any dependent mappings, those target volumes also go Path Offline. The source volume remains Online.

8.4.17 Asynchronous notifications


FlashCopy raises informational event log entries for certain mapping and Consistency Group state transitions. These state transitions occur as a result of configuration events that complete asynchronously, and the informational events can be used to generate Simple Network Management Protocol (SNMP) traps to notify the user. Other configuration events complete synchronously, and no informational events are logged as a result of these events:
- PREPARE_COMPLETED: This state transition is logged when the FlashCopy mapping or Consistency Group enters the Prepared state as a result of a user request to prepare. The user can now start (or stop) the mapping or Consistency Group.
- COPY_COMPLETED: This state transition is logged when the FlashCopy mapping or Consistency Group enters the Idle_or_copied state when it was previously in the Copying or Stopping state. This state transition indicates that the target disk now contains a complete copy and no longer depends on the source.
- STOP_COMPLETED: This state transition is logged when the FlashCopy mapping or Consistency Group has entered the Stopped state as a result of a user request to stop. It is logged after the automatic copy process has completed, and includes mappings where no copying needed to be performed. This state transition differs from the event that is logged when a mapping or group enters the Stopped state as a result of an I/O error.


8.4.18 Interoperation with Metro Mirror and Global Mirror


FlashCopy can work together with Metro Mirror and Global Mirror to provide better protection of the data. For example, we can perform a Metro Mirror copy to duplicate data from Site_A to Site_B and then perform a daily FlashCopy to back up the data to another location. Table 8-6 lists which combinations of FlashCopy and remote copy are supported. In the table, remote copy refers to Metro Mirror and Global Mirror.
Table 8-6 FlashCopy and remote copy interaction

FlashCopy Source:
- At the remote copy primary site: Supported.
- At the remote copy secondary site: Supported. Latency: When the FlashCopy relationship is in the Preparing or Prepared state, the cache at the remote copy secondary site operates in write-through mode. This adds additional latency to the already latent remote copy relationship.

FlashCopy Target:
- At the remote copy primary site: This is a supported combination with several restrictions: 1) Issuing a stop -force may require the remote copy relationship to be fully resynchronized. 2) The code level must be 6.2.x or later. 3) The I/O Group must be the same.
- At the remote copy secondary site: This is a supported combination with the main restriction that the FlashCopy mapping cannot be copying, stopping, or suspended. Otherwise, the restrictions are the same as at the remote copy primary site.

8.4.19 FlashCopy presets


The GUI provides three FlashCopy presets (Snapshot, Clone, and Backup) to simplify the most common FlashCopy operations. Although these presets meet the majority of FlashCopy requirements, they do not provide support for all possible FlashCopy options. If more specialized options are required that are not supported by the presets, they must be configured using CLI commands. The following sections describe the three presets and their use cases.

Snapshot
Options (if the target is auto-created):
- Thin-provisioned target with rsize = 0
- Autoexpand = on
- Target pool is the primary copy's source pool
- No background copy

Use case: The user wants to produce a copy of a volume without impacting the availability of the volume. The user does not anticipate a large number of changes to the source or target volume; a significant proportion of the volumes will not change. By ensuring that only changes require a copy of data to be made, the total amount of disk space required for the copy is significantly reduced, which allows many such snapshot copies to be used in the environment. Snapshots are therefore useful for providing protection against corruption or similar issues with the validity of the data, but they do not provide protection from physical controller failures. Snapshots can also provide a vehicle for performing repeatable testing, including what-if modeling based on production data, without requiring a full copy of the data to be provisioned.

Clone
Options (if the target is auto-created):
- Created volume is identical to the primary copy of the source volume (including storage pool)
- Auto-delete
- Clean rate = 0
- Background copy rate = 50

Use case: The user wants a copy of the volume that can be modified without impacting the original. After the clone is established, there is no expectation that it will be refreshed or that there will be any further need to reference the original production data again. If the source is thin-provisioned, the auto-created target is also thin-provisioned.

Backup
Options (if the target is auto-created):
- Created volume is identical to the primary copy of the source volume
- Incremental
- Clean rate = 0
- Background copy rate = 50

Use case: The user wants to create a copy of the volume that can be used as a backup in the event that the source becomes unavailable, as in the case of the loss of the underlying physical controller. The user plans to periodically update the secondary copy and does not want to suffer the overhead of creating a completely new copy each time (incremental FlashCopy times are faster than a full copy, which helps to reduce the window during which the new backup is not yet fully effective). If the source is thin-provisioned, the auto-created target is also thin-provisioned.

Another use case, which the preset's name does not suggest, is to create and maintain (periodically refresh) an independent image that can be subjected to intensive I/O (for example, data mining) without impacting source volume performance.
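A hedged CLI approximation of what the Backup preset configures (the mapping and volume names are hypothetical; -incremental, -cleanrate, and -copyrate correspond to the options listed above):

svctask mkfcmap -source prod_vol -target backup_vol -name backup_map -incremental -cleanrate 0 -copyrate 50
svctask prestartfcmap backup_map
svctask startfcmap backup_map

Subsequent restarts of backup_map copy only the grains that changed since the previous start.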

8.5 Volume Mirroring and migration options


Volume Mirroring is a simple RAID-1-like function that allows a volume to remain online even when the storage pool backing it becomes inaccessible. It is designed to protect the volume from storage infrastructure failures by seamlessly mirroring between storage pools. This function is provided by a specific volume mirroring function in the I/O stack and cannot be manipulated like a FlashCopy mapping or other types of copy volumes. This feature does, however, provide migration functionality, which can be obtained by splitting the mirrored copy from the source or by using the migrate-to-another-pool function. This feature does not control back-end storage mirroring or replication.

With this feature, host I/O completes when both copies are written. Prior to 6.3.0, this feature took a copy offline when it had an I/O timeout, and then resynchronized with the online copy after it recovered. With 6.3.0, this feature has been enhanced with a tunable latency tolerance, which provides an option to choose between host I/O latency and redundancy between the two copies. The tunable value is either Latency or Redundancy:
- The Latency tuning option (set with svctask chvdisk -mirrorwritepriority latency) is the default and matches the behavior of releases prior to 6.3.0; it gives preference to host I/O over availability.
- If availability is more important than I/O response time in your environment, give preference to redundancy with svctask chvdisk -mirrorwritepriority redundancy.

Regardless of which option you choose, Volume Mirroring can provide extra protection for your environment.

With regard to migration, several options are available:
- Export to Image mode: This allows you to move storage from managed mode to image mode, which is useful if you are using the SVC as a migration device. For example, vendor A's product cannot communicate with vendor B's product, but you need to migrate existing data from vendor A to vendor B. Using Export to Image mode allows you to migrate data using Copy Services functions and then return control to the native array, while maintaining host access.
- Import to Image mode: This allows you to import an existing MDisk or LUN with existing data from an external storage system without putting metadata on it, so the existing data remains intact. After it is imported, all Copy Services functions may be used to migrate the storage to other locations, while the data remains accessible to your hosts.
- Volume migration using Volume Mirroring and then Split into New Volume: This allows you to leverage the RAID-1 functionality to create two copies of data that initially have a set relationship (one primary and one secondary), then break the relationship (both primary, no relationship) to make them independent copies of data. You can use this to migrate data between storage pools and devices. You might use this option if you want to move volumes to multiple separate storage pools. Note that you can mirror only one volume at a time; see the CLI sketch following the note below.
- Volume migration using Move to Another Pool: This allows any volume to be moved between storage pools without interruption to host access. It is effectively a quicker, single-step version of Volume Mirroring with Split into New Volume. You might use this option if you want to move volumes in a single step or do not already have a volume mirror copy.

Note: Although the migration methods listed above do not disrupt access, a brief outage is required to install the host drivers for your SVC. See the IBM System Storage SAN Volume Controller Host Attachment User's Guide, SC26-7905, for more detail, and make sure to consult the revision that applies to your SVC.
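A minimal CLI sketch of the mirroring-based migration, assuming a volume named volume_A and a target pool named Pool_B (both names hypothetical):

svctask addvdiskcopy -mdiskgrp Pool_B volume_A               (add a second copy in the target pool)
svcinfo lsvdisksyncprogress volume_A                         (wait until the new copy is synchronized)
svctask splitvdiskcopy -copy 1 -name volume_A_new volume_A   (split copy 1 into a new, stand-alone volume)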

Managing Volume Mirror and migration with the GUI


To make a volume mirror via the GUI, you need to select Add Mirrored Copy from the volume options menu as shown in Figure 8-11 on page 402.


Figure 8-11 Add Volume Mirrored Copy

When you do this, you are given the option to specify the type of volume mirror to make (generic or thin-provisioned) and to select the storage pool to use for the copy, as shown in Figure 8-12. Make sure that you select a storage pool with sufficient space and similar performance characteristics. Then select Add Copy.


Figure 8-12 Confirm Volume Mirror type and storage pool to use for the mirror

After you create your mirror, you can view the distribution of extents, as shown in Figure 8-13 on page 404, or you can view the mirroring progress percentage via Running Tasks, as shown in Figure 8-14.

Note: Extent distribution for the mirror copy is automatically balanced as well as possible within the selected storage pool.


Figure 8-13 The distribution of extents for primary and mirror copy of a volume

Figure 8-14 Progress of a mirror copy creation as viewed via Running Tasks

After the copy completes, you have the option of splitting either copy of the mirror into a new stand-alone volume. This is shown in Figure 8-15.


Figure 8-15 Selection of Split into New Volume

After you select Split into New Volume on either Copy0 or Copy1, you are presented with the option to specify a new volume name and confirm the split, as shown in Figure 8-16.

Figure 8-16 Confirmation of Volume Mirror split

After providing a new volume name (optional but advised) and confirming the split, you can see the results, as shown in Figure 8-17.


Figure 8-17 Results of Volume Mirror Split

Note: When you split a volume copy, the view of it returns to the pool in which it was created, not where the primary copy existed.

If you want to migrate your volumes to another storage pool in one step instead of two, you can use the Migrate to Another Pool option, as shown in Figure 8-18.

Figure 8-18 Using the Migrate to Another Pool option

Note: You cannot migrate more than one volume at a time. For this reason, Copy Services functions are more expedient if available.


If the volume has only one copy, you are presented with a storage pool selection dialog. If it has two, you are presented with a slight variation that allows you to choose which copy to migrate, as shown in Figure 8-19.

Figure 8-19 Selecting destination storage pool of a mirrored volume

Note that the selection presented in this dialog denotes the current pool of each volume copy, so that you can better determine which storage pool to use.

Finally, we explore image mode import and image mode export. Both of these methods allow you to leverage all Copy Services functions on storage that contains pre-existing data. To import pre-existing storage, select Pools → MDisks by Pool → Not in a Pool, then select the storage that you wish to import and right-click it. You are presented with the option shown in Figure 8-20.


Figure 8-20 Import in Image Mode option

When you select Import, you receive a dialog that allows you to import the storage as a generic volume or by using thin provisioning, and to disable the cache if you so choose. This dialog is shown in Figure 8-21.

Figure 8-21 Import Wizard

After clicking Next, you are presented with the option to select an existing storage pool in which to place the imported volume. If you do not make a selection, the volume is imported into a default migration pool, as shown in Figure 8-22.


Figure 8-22 Sample of Default Migration Pool for Image Mode

To perform an export of a volume, it must be in managed mode, not image mode. Select the volume and right-click it, as shown in Figure 8-23.

Figure 8-23 Export to Image Mode Option

You can export only one volume or copy at a time, and you need to select a storage pool for it when you export it, as shown in Figure 8-24.

Figure 8-24 Select pool to export managed mode to image mode

When you click Finish, you have exported the volume or copy to image mode. Using this ability, you can use the SVC as a data mover device to migrate data between storage systems.


8.6 Metro Mirror


In the following topics, we describe the Metro Mirror copy service, which is a synchronous remote copy function. Metro Mirror in SVC is similar to Metro Mirror in the IBM System Storage DS family at a functional level, but it is a different implementation. SVC provides a single point of control when enabling Metro Mirror in your SAN, regardless of the disk subsystems that are used, as long as those disk subsystems are supported by the SVC.

The general application of Metro Mirror is to maintain two real-time synchronized copies of a disk. Often, the two copies are geographically dispersed between two SVC clusters, although it is possible to use Metro Mirror within a single cluster (within an I/O Group). If the master copy fails, you can enable an auxiliary copy for I/O operation.

Tips: Intracluster Metro Mirror consumes more resources within the cluster, compared to an intercluster Metro Mirror relationship, where resource allocation is shared between the clusters. Licensing must also be doubled, because both source and target are within the same cluster. Use intercluster Metro Mirror when possible.

A typical application of this function is to set up a dual-site solution using two SVC clusters. The first site is considered the primary or production site, and the second site is considered the backup or failover site, which is activated when a failure at the first site is detected.

8.6.1 Metro Mirror overview


Metro Mirror establishes a synchronous relationship between two volumes of equal size. The volumes in a Metro Mirror relationship are referred to as the master (primary) volume and the auxiliary (secondary) volume. Metro Mirror is primarily used in metropolitan or geographical areas, up to a maximum distance of 300 km, to provide synchronous replication of data.

With synchronous copies, host applications write to the master volume but do not receive confirmation that the write operation has completed until the data is written to the auxiliary volume. This action ensures that both volumes have identical data when the copy completes. After the initial copy completes, the Metro Mirror function maintains a fully synchronized copy of the source data at the target site at all times.

Keep in mind that increased distance directly impacts host I/O performance, because the writes are synchronous. Use the requirements for application performance when selecting your Metro Mirror auxiliary location.

Consistency Groups can be used to maintain data integrity for dependent writes, similar to FlashCopy Consistency Groups and Global Mirror Consistency Groups, which are discussed later.

The SVC provides both intracluster and intercluster Metro Mirror.

Intracluster Metro Mirror


Performs intracluster copying of a volume, in which both volumes belong to the same cluster and I/O Group within the cluster. Since it is within the same I/O group, there must be sufficient bitmap space within the I/O group for both sets of volumes, as well as licensing on the cluster. Note: Performing Metro Mirror across I/O Groups within a cluster is not supported.


Intercluster Metro Mirror


Performs intercluster copying of a volume, in which one volume belongs to one cluster and the other volume belongs to a different cluster. Two SVC clusters must be defined in an SVC partnership, which must be performed on both SVC clusters, to establish a fully functional Metro Mirror partnership.

Using standard single-mode connections, the supported distance between two SVC clusters in a Metro Mirror partnership is 10 km (6.2 miles), although greater distances can be achieved by using extenders. For extended distance solutions, contact your IBM representative.

Limit: When a local and a remote fabric are connected together for Metro Mirror purposes, the inter-switch link (ISL) hop count between a local node and a remote node cannot exceed seven.

8.6.2 Remote copy techniques


In this section we describe the differences between synchronous remote copy and asynchronous remote copy.

Synchronous remote copy


Metro Mirror is a fully synchronous remote copy technique that ensures, as long as writes to the auxiliary volumes are possible, that writes are committed at both the master and auxiliary volumes before write completion is acknowledged to the host.

Events such as a loss of connectivity between clusters can cause mirrored writes from the master to the auxiliary volume to fail. In that case, Metro Mirror suspends writes to the auxiliary volume and allows I/O to the master volume to continue, to avoid impacting the operation of the master volumes.

Figure 8-25 illustrates how a write to the master volume is mirrored to the cache of the auxiliary volume before an acknowledgement of the write is sent back to the host that issued the write. This process ensures that the auxiliary is synchronized in real time, in case it is needed in a failover situation. However, this process also means that the application is exposed to the latency and bandwidth limitations (if any) of the communication link between the master and auxiliary volumes. This process might lead to unacceptable application performance, particularly when placed under peak load. Therefore, using Metro Mirror has distance limitations, based on your performance requirements; the SVC does not support distances of more than 300 km.


Figure 8-25 Write on volume in Metro Mirror relationship

8.6.3 Metro Mirror features


SVC Metro Mirror supports the following features:
- Synchronous remote copy of volumes dispersed over metropolitan distances.
- SVC implements Metro Mirror relationships between volume pairs, with each volume in a pair managed by an SVC cluster or an IBM Storwize V7000 cluster (requires 6.3.0).
- SVC supports intracluster Metro Mirror, where both volumes belong to the same cluster (and I/O Group).
- SVC supports intercluster Metro Mirror, where each volume belongs to a separate SVC cluster. You can configure a specific SVC cluster for partnership with another cluster. All intercluster Metro Mirror processing takes place between two SVC clusters that are configured in a partnership.
- Intercluster and intracluster Metro Mirror can be used concurrently.
- SVC does not require that a control network or fabric is installed to manage Metro Mirror. For intercluster Metro Mirror, SVC maintains a control link between the two clusters. This control link is used to control the state and coordinate updates at either end. The control link is implemented on top of the same FC fabric connection that the SVC uses for Metro Mirror I/O.
- SVC implements a configuration model that maintains the Metro Mirror configuration and state through major events, such as failover, recovery, and resynchronization, to minimize user configuration action through these events.
- SVC allows resynchronization of changed data so that write failures occurring on either the master or auxiliary volumes do not require a complete resynchronization of the relationship.


8.6.4 Multiple Cluster Mirroring


Each SVC cluster can maintain up to three partner cluster relationships, allowing as many as four clusters to be directly associated with each other. This SVC partnership capability enables the implementation of disaster recovery (DR) solutions. Figure 8-26 shows an example of a Multiple Cluster Mirroring configuration.

Figure 8-26 Multiple Cluster Mirroring configuration example

Software level restrictions for Multiple Cluster Mirroring:
- Partnership between a cluster running 6.1.0 and a cluster running a version earlier than 4.3.1 is not supported.
- Clusters in a partnership where one cluster is running 6.1.0 and the other is running 4.3.1 cannot participate in additional partnerships with other clusters.
- Clusters that are all running either 6.1.0 or 5.1.0 can participate in up to three cluster partnerships.
- To use an IBM Storwize V7000 as a cluster partner, it must be running 6.3.0 or later code and be configured to operate in the replication layer. Layer settings are only available on the V7000.

Note: SVC 6.1 supports object names up to 63 characters. Previous levels only supported up to 15 characters. When SVC 6.1 clusters are partnered with 4.3.1 and 5.1.0 clusters, various object names will be truncated at 15 characters when displayed from 4.3.1 and 5.1.0 clusters.


Supported Multiple Cluster Mirroring topologies


Multiple Cluster Mirroring allows for various partnership topologies as illustrated in the following examples:

Example: A-B, A-C, and A-D

Figure 8-27 SVC star topology

Figure 8-27 shows four clusters in a star topology, with cluster A at the center. Cluster A can be a central DR site for the three other locations. Using a star topology, you can migrate applications by using a process like the one described in the following example:
1. Suspend the application at A.
2. Remove the A-B relationship.
3. Create the A-C relationship (or alternatively, the B-C relationship).
4. Synchronize to cluster C, and ensure that A-C is established.


Example: A-B, A-C, and B-C

Figure 8-28 SVC triangle topology

Example: A-B, A-C, A-D, B-C, B-D, and C-D

Figure 8-29 SVC fully connected topology

Figure 8-29 is a fully connected mesh where every cluster has a partnership to each of the three other clusters. This allows volumes to be replicated between any pair of clusters.

Example: A-B, B-C, and C-D
Figure 8-30 shows a daisy-chain topology.

Figure 8-30 SVC daisy-chain topology


Note that although clusters can have up to three partnerships, a volume can only be part of one remote copy relationship, for example, A-B.

Cluster partnership intermix: All of the above topologies are valid for an intermix of the IBM Storwize V7000 with the SVC, as long as the V7000 is set to the replication layer and running 6.3.0 code.

Upgrade restriction: Upgrading a cluster to 6.1.0 requires that the partner cluster be running 4.3.1 or later. If the partner cluster is running 4.3.0, it must first be upgraded to 4.3.1.

8.6.5 Importance of write ordering


Many applications that use block storage must survive failures, such as the loss of power or a software crash, without losing the data that existed prior to the failure. Because many applications need to perform large numbers of update operations in parallel with storage, maintaining write ordering is key to ensuring the correct operation of applications following a disruption.

An application that performs a high volume of database updates is usually designed with the concept of dependent writes. With dependent writes, it is important to ensure that an earlier write has completed before a later write is started. Reversing the order of dependent writes can undermine an application's algorithms and can lead to problems, such as detected or undetected data corruption. See 8.4.3, Consistency Groups on page 383 for more information regarding dependent writes.

Metro Mirror Consistency Groups


A Metro Mirror Consistency Group can contain an arbitrary number of relationships, up to the maximum number of Metro Mirror relationships supported by the SVC cluster. Metro Mirror commands can be issued to a Metro Mirror Consistency Group, and therefore simultaneously to all Metro Mirror relationships defined within that Consistency Group, or to a single Metro Mirror relationship that is not part of a Metro Mirror Consistency Group. For example, when issuing a Metro Mirror startrcconsistgrp command to the Consistency Group, all of the Metro Mirror relationships in the Consistency Group are started at the same time.

Figure 8-31 on page 417 illustrates the concept of Metro Mirror Consistency Groups. Because MM_Relationship 1 and MM_Relationship 2 are part of the Consistency Group, they can be handled as one entity. The stand-alone MM_Relationship 3 is handled separately.


Figure 8-31 Metro Mirror Consistency Group

Certain uses of Metro Mirror require manipulation of more than one relationship. Metro Mirror Consistency Groups provide the ability to group relationships so that they are manipulated in unison. Consider the following points:
- Metro Mirror relationships can be part of a Consistency Group, or they can be stand-alone and therefore handled as single instances.
- A Consistency Group can contain zero or more relationships. An empty Consistency Group, with zero relationships in it, has little purpose until it is assigned its first relationship, except that it has a name.
- All relationships in a Consistency Group must have corresponding master and auxiliary volumes.
- Although it is possible to use Consistency Groups to manipulate sets of relationships that do not need to satisfy these strict rules, this manipulation can lead to undesired side effects. The rules behind a Consistency Group mean that certain configuration commands are prohibited. These configuration commands are not prohibited if the relationship is not part of a Consistency Group.

For example, consider the case of two applications that are completely independent, yet they are placed into a single Consistency Group. If an error causes a loss of synchronization, a background copy process is required to recover synchronization. While this process is in progress, Metro Mirror rejects attempts to enable access to the auxiliary volumes of either application. If one application finishes its background copy much more quickly than the other application, Metro Mirror still refuses to grant access to its auxiliary volumes, even though it is safe in this case, because the Metro Mirror policy is to refuse access to the entire Consistency Group if any part of it is inconsistent.

Stand-alone relationships and Consistency Groups share a common configuration and state model. All of the relationships in a non-empty Consistency Group have the same state as the Consistency Group.

8.6.6 Remote copy intercluster communication


All intercluster communication between clusters in a Metro Mirror and Global Mirror partnership is performed over the SAN. The following section provides details regarding this communication path.

Zoning
SVC node ports on each SVC cluster must be able to communicate with each other for the partnership creation to be performed. Switch zoning is critical to facilitating intercluster communication. See Chapter 3, Planning and configuration on page 67 for critical information regarding proper zoning for intercluster communication.

Intercluster communication channels


When an SVC cluster partnership has been defined on a pair of clusters, additional intercluster communication channels are established:
- A single control channel, which is used to exchange and coordinate configuration information
- I/O channels between each of the nodes in the clusters

These channels are maintained and updated as nodes and links appear and disappear from the fabric, and they are repaired to maintain operation where possible. If communication between SVC clusters is interrupted or lost, an event is logged (and consequently, the Metro Mirror and Global Mirror relationships are stopped).

Note: SVC can be configured to raise Simple Network Management Protocol (SNMP) traps to the enterprise monitoring system to alert on events indicating that an interruption in internode communication has occurred.

Intercluster links
All SVC nodes maintain a database of other devices that are visible on the fabric. This database is updated as devices appear and disappear. Devices that advertise themselves as SVC nodes are categorized according to the SVC cluster to which they belong. SVC nodes that belong to the same cluster establish communication channels between themselves and begin to exchange messages to implement clustering and the functional protocols of SVC.

Nodes that are in separate clusters do not exchange messages after initial discovery is complete, unless they have been configured together to perform a remote copy relationship.

The intercluster link carries control traffic to coordinate activity between two clusters. It is formed between one node in each cluster. The traffic between the designated nodes is distributed among the logins that exist between those nodes. If the designated node fails (or all of its logins to the remote cluster fail), a new node is chosen to carry control traffic. This node change causes the I/O to pause, but it does not put the relationships in a ConsistentStopped state.


8.6.7 Metro Mirror attributes


The Metro Mirror function in SVC possesses the following attributes:
1. An SVC cluster partnership is created between two SVC clusters, or between an SVC cluster and an IBM Storwize V7000 operating in the replication layer (for intercluster Metro Mirror).
2. A Metro Mirror relationship is created between two volumes of the same size.
3. To manage multiple Metro Mirror relationships as one entity, relationships can be made part of a Metro Mirror Consistency Group, which ensures data consistency across multiple Metro Mirror relationships and provides ease of management.
4. When a Metro Mirror relationship is started, and when the background copy has completed, the relationship becomes consistent and synchronized.
5. After the relationship is synchronized, the auxiliary volume holds a copy of the production data at the primary site, which can be used for DR.
6. To access the auxiliary volume, the Metro Mirror relationship must be stopped with the access option enabled before write I/O is allowed to the auxiliary.
7. The remote host server is mapped to the auxiliary volume, and the disk is available for I/O.

8.6.8 Methods of synchronization


This section describes two methods that can be used to establish a synchronized relationship.

Full synchronization after creation


The full synchronization after creation method is the default method. It is the simplest method, in that it requires no administrative activity apart from issuing the necessary commands. However, in certain environments, the available bandwidth can make this method unsuitable.

Use this command sequence for a single relationship (a hedged sketch follows):
1. Run mkrcrelationship without specifying the -sync option.
2. Run startrcrelationship without specifying the -clean option.
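As a minimal sketch of this default method, assuming hypothetical volume names DB_Master and DB_Aux, a remote cluster named ITSO_SVC2, and a relationship named MM_Rel1:

svctask mkrcrelationship -master DB_Master -aux DB_Aux -cluster ITSO_SVC2 -name MM_Rel1
svctask startrcrelationship MM_Rel1

Because -sync is omitted, the relationship is created in the InconsistentStopped state, and the start command begins the background copy.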

Synchronized before creation


In this method, the administrator must ensure that the master and auxiliary volumes contain identical data before creating the relationship. For example, both disks can be created with the security delete feature, so as to make all data zero, or a complete tape image (or another method of moving data) can be copied from one disk to the other. With this technique, do not allow I/O on the master or auxiliary before the relationship is established.

Then, the administrator must run these commands (a hedged sketch follows):
1. Run mkrcrelationship with the -sync flag.
2. Run startrcrelationship without the -clean flag.

Attention: Failure to perform these steps correctly can cause Metro Mirror to report the relationship as consistent when it is not, thereby creating a data loss or data integrity exposure for hosts accessing data on the auxiliary volume.
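A minimal sketch under the same hypothetical names as before, this time asserting that the volumes already contain identical data:

svctask mkrcrelationship -master DB_Master -aux DB_Aux -cluster ITSO_SVC2 -name MM_Rel1 -sync
svctask startrcrelationship MM_Rel1

The -sync flag tells the SVC that no initial background copy is required; the SVC does not verify this claim, which is why the attention note above applies.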


8.6.9 Metro Mirror states and events


In this section we describe the various states of a Metro Mirror relationship and the conditions that cause them to change. In Figure 8-32, the Metro Mirror relationship state diagram shows an overview of states that can apply to a Metro Mirror relationship in a connected state.

Figure 8-32 Metro Mirror mapping state diagram

When creating the Metro Mirror relationship, you can specify whether the auxiliary volume is already in sync with the master volume, in which case the background copy process is skipped. This capability is especially useful when creating Metro Mirror relationships for volumes that have been created with the format option.

The step identifiers in Figure 8-32 are described here.

Step 1:
a. The Metro Mirror relationship is created with the -sync option, and the Metro Mirror relationship enters the ConsistentStopped state.
b. The Metro Mirror relationship is created without specifying that the master and auxiliary volumes are in sync, and the Metro Mirror relationship enters the InconsistentStopped state.

Step 2:


a. When starting a Metro Mirror relationship in the ConsistentStopped state, the Metro Mirror relationship enters the ConsistentSynchronized state, provided that no updates (write I/O) have been performed on the master volume while in the ConsistentStopped state. Otherwise, the -force option must be specified, and the Metro Mirror relationship then enters the InconsistentCopying state while the background copy is started.
b. When starting a Metro Mirror relationship in the InconsistentStopped state, the Metro Mirror relationship enters the InconsistentCopying state while the background copy is started.

Step 3:
When the background copy completes, the Metro Mirror relationship transitions from the InconsistentCopying state to the ConsistentSynchronized state.

Step 4:
a. When stopping a Metro Mirror relationship in the ConsistentSynchronized state and specifying the -access option, which enables write I/O on the auxiliary volume, the Metro Mirror relationship enters the Idling state.
b. To enable write I/O on the auxiliary volume when the Metro Mirror relationship is in the ConsistentStopped state, issue the command svctask stoprcrelationship with the -access option, and the Metro Mirror relationship enters the Idling state.

Step 5:
a. When starting a Metro Mirror relationship that is in the Idling state, you must specify the -primary argument to set the copy direction. Provided that no write I/O has been performed (to either the master or auxiliary volume) while in the Idling state, the Metro Mirror relationship enters the ConsistentSynchronized state.
b. If write I/O has been performed to either the master or auxiliary volume, the -force option must be specified, and the Metro Mirror relationship then enters the InconsistentCopying state while the background copy is started.

Stop or Error: When a Metro Mirror relationship is stopped (either intentionally or due to an error), a state transition is applied. For example, Metro Mirror relationships in the ConsistentSynchronized state enter the ConsistentStopped state, and Metro Mirror relationships in the InconsistentCopying state enter the InconsistentStopped state. If the connection is broken between the SVC clusters in a partnership, all (intercluster) Metro Mirror relationships enter a Disconnected state. For further information, refer to Connected versus disconnected on page 421.

Common states: Stand-alone relationships and Consistency Groups share a common configuration and state model. All Metro Mirror relationships in a Consistency Group that is not empty have the same state as the Consistency Group.

State overview
In the following sections, we provide an overview of the different Metro Mirror states.

Connected versus disconnected


Under certain error scenarios (for example, a power failure at one site causing one complete cluster to disappear), communications between two clusters in a Metro Mirror relationship can be lost. Alternatively, the fabric connection between the two clusters might fail, leaving the two clusters running but unable to communicate with each other.


When the two clusters can communicate, the clusters and the relationships spanning them are described as connected. When they cannot communicate, the clusters and the relationships spanning them are described as disconnected.

In the disconnected state, both clusters are left with fragmented relationships and are limited with regard to the configuration commands that can be performed. The disconnected relationships are portrayed as having a changed state. The new states describe what is known about the relationship and which configuration commands are permitted.

When the clusters can communicate again, the relationships become connected again. Metro Mirror automatically reconciles the two state fragments, taking into account any configuration or other event that took place while the relationship was disconnected. As a result, the relationship can either return to the state that it was in when it became disconnected or enter a new state.

Relationships that are configured between volumes in the same SVC cluster (intracluster) are never described as being in a disconnected state.

Consistent versus inconsistent


Relationships that contain volumes that are operating as secondaries can be described as being consistent or inconsistent. Consistency Groups that contain relationships can also be described as being consistent or inconsistent. The consistent or inconsistent property describes the relationship of the data on the auxiliary to the data on the master volume. It can be considered a property of the auxiliary volume itself.

An auxiliary volume is described as consistent if it contains data that could have been read by a host system from the master if power had failed at an imaginary point in time while I/O was in progress, and power was later restored. This imaginary point in time is defined as the recovery point. The requirements for consistency are expressed with respect to activity at the master up to the recovery point:
- The auxiliary volume contains the data from all of the writes to the master for which the host received successful completion and that data has not been overwritten by a subsequent write (before the recovery point).
- For writes for which the host did not receive successful completion (that is, it received bad completion or no completion at all), if the host subsequently performed a read from the master of that data, that read returned successful completion, and no later write was sent (before the recovery point), the auxiliary contains the same data as the data returned by the read from the master.

From the point of view of an application, consistency means that an auxiliary volume contains the same data as the master volume at the recovery point (the time at which the imaginary power failure occurred). If an application is designed to cope with an unexpected power failure, this guarantee of consistency means that the application can use the auxiliary and begin operation just as though it had been restarted after the hypothetical power failure. Again, maintaining the application write ordering is the key property of consistency. See 8.4.3, Consistency Groups on page 383 for more information regarding dependent writes.

If a relationship, or a set of relationships, is inconsistent and an attempt is made to start an application using the data in the secondaries, a number of outcomes are possible:
- The application might decide that the data is corrupt and crash or exit with an event code.
- The application might fail to detect that the data is corrupt and return erroneous data.
- The application might work without a problem.

Because of the risk of data corruption, and in particular undetected data corruption, Metro Mirror strongly enforces the concept of consistency and prohibits access to inconsistent data.

Consistency as a concept can be applied to a single relationship or to a set of relationships in a Consistency Group. Write ordering is a concept that an application can maintain across a number of disks accessed through multiple systems; therefore, consistency must operate across all of those disks.

When deciding how to use Consistency Groups, the administrator must consider the scope of an application's data, taking into account all of the interdependent systems that communicate and exchange information. If two programs or systems communicate and store details as a result of the information exchanged, either of the following actions might occur:
- All of the data accessed by the group of systems must be placed into a single Consistency Group.
- The systems must be recovered independently (each within its own Consistency Group). Then, each system must perform recovery with the other applications to become consistent with them.

Consistent versus synchronized


A copy that is consistent and up-to-date is described as synchronized. In a synchronized relationship, the master and auxiliary volumes differ only in regions where writes are outstanding from the host.

Consistency does not mean that the data is up-to-date. A copy can be consistent and yet contain data that was frozen at a point in time in the past. Write I/O might have continued to the master without being copied to the auxiliary. This state arises when it becomes impossible to keep the copy up-to-date and maintain consistency. An example is a loss of communication between clusters while writing to the auxiliary.

When communication is lost for an extended period of time, Metro Mirror tracks the changes that occurred on the master, but not the order of such changes or the details of such changes (write data). When communication is restored, it is impossible to synchronize the auxiliary without sending write data to the auxiliary out of order and, therefore, losing consistency. Two policies can be used to cope with this situation:
- Make a point-in-time copy of the consistent auxiliary before allowing the auxiliary to become inconsistent. In the event of a disaster before consistency is achieved again, the point-in-time copy target provides a consistent, although out-of-date, image.
- Accept the loss of consistency and the loss of a useful auxiliary, while synchronizing the auxiliary.

Detailed states
The following sections detail the states that are portrayed to the user, for either Consistency Groups or relationships. They also detail the additional information that is available in each state. The major states are designed to provide guidance about the configuration commands that are available.


InconsistentStopped
InconsistentStopped is a connected state. In this state, the master is accessible for read and write I/O, but the auxiliary is not accessible for either read or write I/O. A copy process needs to be started to make the auxiliary consistent. This state is entered when the relationship or Consistency Group was InconsistentCopying and has either suffered a persistent error or received a stop command that has caused the copy process to stop. A start command causes the relationship or Consistency Group to move to the InconsistentCopying state. A stop command is accepted, but has no effect. If the relationship or Consistency Group becomes disconnected, the auxiliary side transitions to InconsistentDisconnected. The master side transitions to IdlingDisconnected.

InconsistentCopying
InconsistentCopying is a connected state. In this state, the master is accessible for read and write I/O, but the auxiliary is not accessible for either read or write I/O. This state is entered after a start command is issued to an InconsistentStopped relationship or a Consistency Group. It is also entered when a forced start is issued to an Idling or ConsistentStopped relationship or Consistency Group. In this state, a background copy process runs that copies data from the master to the auxiliary volume. In the absence of errors, an InconsistentCopying relationship is active, and the copy progress increases until the copy process completes. In certain error situations, the copy progress might freeze or even regress. A persistent error or stop command places the relationship or Consistency Group into an InconsistentStopped state. A start command is accepted but has no effect. If the background copy process completes on a stand-alone relationship, or on all relationships for a Consistency Group, the relationship or Consistency Group transitions to the ConsistentSynchronized state. If the relationship or Consistency Group becomes disconnected, the auxiliary side transitions to InconsistentDisconnected. The master side transitions to IdlingDisconnected.

ConsistentStopped
ConsistentStopped is a connected state. In this state, the auxiliary contains a consistent image, but it might be out-of-date with respect to the master. This state can arise when a relationship was in a ConsistentSynchronized state and suffers an error that forces a Consistency Freeze. It can also arise when a relationship is created with a CreateConsistentFlag set to TRUE.

Normally, following an I/O error, subsequent write activity causes updates to the master, and the auxiliary is no longer synchronized (the synchronized attribute is set to false). In this case, to reestablish synchronization, consistency must be given up for a period. You must use a start command with the -force option to acknowledge this condition, and the relationship or Consistency Group transitions to InconsistentCopying. Enter this command only after all outstanding events have been repaired.

In the unusual case where the master and the auxiliary are still synchronized (perhaps following a user stop, and no further write I/O was received), a start command takes the relationship to ConsistentSynchronized. No -force option is required. Also, in this case, you can enter a switch command that moves the relationship or Consistency Group to ConsistentSynchronized and reverses the roles of the master and the auxiliary.

If the relationship or Consistency Group becomes disconnected, the auxiliary transitions to ConsistentDisconnected. The master transitions to IdlingDisconnected.

An informational status log is generated whenever a relationship or Consistency Group enters the ConsistentStopped state with a status of Online. You can configure this event to generate an SNMP trap that can be used to trigger automation or manual intervention to issue a start command following a loss of synchronization.

ConsistentSynchronized
ConsistentSynchronized is a connected state. In this state, the master volume is accessible for read and write I/O, and the auxiliary volume is accessible for read-only I/O. Writes that are sent to the master volume are sent to both the master and auxiliary volumes. Either successful completion must be received for both writes, the write must be failed to the host, or a state must transition out of the ConsistentSynchronized state before a write is completed to the host. A stop command takes the relationship to the ConsistentStopped state. A stop command with the -access parameter takes the relationship to the Idling state. A switch command leaves the relationship in the ConsistentSynchronized state, but it reverses the master and auxiliary roles. A start command is accepted, but it has no effect. If the relationship or Consistency Group becomes disconnected, the same transitions are made as for ConsistentStopped.
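A minimal sketch of the state-changing commands described above, using the hypothetical relationship name MM_Rel1:

svctask stoprcrelationship MM_Rel1
svctask stoprcrelationship -access MM_Rel1
svctask switchrcrelationship -primary aux MM_Rel1

The first command moves the relationship to ConsistentStopped, the second moves it to Idling and enables write I/O on the auxiliary, and the third reverses the master and auxiliary roles while the relationship remains ConsistentSynchronized.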

Idling
Idling is a connected state. Both master and auxiliary volumes operate in the master role. Consequently, both master and auxiliary volumes are accessible for write I/O. In this state, the relationship or Consistency Group accepts a start command. Metro Mirror maintains a record of regions on each disk that received write I/O while idling. This record is used to determine what areas need to be copied following a start command. The start command must specify the new copy direction. A start command can cause a loss of consistency if either volume in any relationship has received write I/O, which is indicated by the Synchronized status. If the start command leads to loss of consistency, you must specify the -force parameter. Following a start command, the relationship or Consistency Group transitions to ConsistentSynchronized if there is no loss of consistency, or to InconsistentCopying if there is a loss of consistency. Also, while in this state, the relationship or Consistency Group accepts a -clean option on the start command. If the relationship or Consistency Group becomes disconnected, both sides change their state to IdlingDisconnected.

IdlingDisconnected
IdlingDisconnected is a disconnected state. The volume or disks in this half of the relationship or Consistency Group are all in the master role and accept read or write I/O.


The priority in this state is to recover the link and restore the relationship or consistency. No configuration activity is possible (except for deletes or stops) until the relationship becomes connected again. At that point, the relationship transitions to a connected state. The exact connected state that is entered depends on the state of the other half of the relationship or Consistency Group, which depends on these factors:
- The state when it became disconnected
- The write activity since it was disconnected
- The configuration activity since it was disconnected

If both halves are IdlingDisconnected, the relationship becomes Idling when reconnected.

While IdlingDisconnected, if a write I/O is received that causes a loss of synchronization (the synchronized attribute transitions from true to false) and the relationship was not already stopped (either through a user stop or a persistent error), an event is raised to notify you of the condition. This same event is also raised when this condition occurs for the ConsistentSynchronized state.

InconsistentDisconnected
InconsistentDisconnected is a disconnected state. The volumes in this half of the relationship or Consistency Group are all in the auxiliary role and do not accept read or write I/O. No configuration activity, except for deletes, is permitted until the relationship becomes connected again.

When the relationship or Consistency Group becomes connected again, the relationship becomes InconsistentCopying automatically, unless either condition is true:
- The relationship was InconsistentStopped when it became disconnected.
- The user issued a stop command while disconnected.

In either case, the relationship or Consistency Group becomes InconsistentStopped.

ConsistentDisconnected
ConsistentDisconnected is a disconnected state. The volumes in this half of the relationship or Consistency Group are all in the auxiliary role and accept read I/O but not write I/O.

This state is entered from ConsistentSynchronized or ConsistentStopped when the auxiliary side of a relationship becomes disconnected. In this state, the relationship or Consistency Group displays an attribute of FreezeTime, which is the point in time that consistency was frozen. When entered from ConsistentStopped, it retains the time that it had in that state. When entered from ConsistentSynchronized, the FreezeTime shows the last time at which the relationship or Consistency Group was known to be consistent. This time corresponds to the time of the last successful heartbeat to the other cluster.

A stop command with the -access flag set to true transitions the relationship or Consistency Group to the IdlingDisconnected state. This state allows write I/O to be performed to the auxiliary volume and is used as part of a DR scenario.

When the relationship or Consistency Group becomes connected again, the relationship or Consistency Group becomes ConsistentSynchronized only if this action does not lead to a loss of consistency. These conditions must be true:
- The relationship was ConsistentSynchronized when it became disconnected.
- No writes received successful completion at the master while disconnected.

Otherwise, the relationship becomes ConsistentStopped. The FreezeTime setting is retained.

Empty
This state only applies to Consistency Groups. It is the state of a Consistency Group that has no relationships and no other state information to show. It is entered when a Consistency Group is first created. It is exited when the first relationship is added to the Consistency Group, at which point, the state of the relationship becomes the state of the Consistency Group.

Background copy
Metro Mirror paces the rate at which background copy is performed by the appropriate relationships. Background copy takes place on relationships that are in the InconsistentCopying state with a status of Online. The quota of background copy (configured on the intercluster link) is divided evenly between all of the nodes that are performing background copy for one of the eligible relationships. This allocation is made irrespective of the number of disks for which the node is responsible. Each node in turn divides its allocation evenly between the multiple relationships performing a background copy. For intracluster relationships, each node is assigned a static quota of 25 MBps.

8.6.10 Practical use of Metro Mirror


The master volume is the production volume, and updates to this copy are mirrored in real time to the auxiliary volume. The contents of the auxiliary volume that existed when the relationship was created are destroyed.

Switching copy direction: The copy direction for a Metro Mirror relationship can be switched so that the auxiliary volume becomes the master and the master volume becomes the auxiliary, much like FlashCopy's restore option.

While the Metro Mirror relationship is active, the auxiliary volume is not accessible for host application write I/O at any time. The SVC allows read-only access to the auxiliary volume when it contains a consistent image. This access is only intended to allow boot-time operating system discovery to complete without error, so that any hosts at the secondary site can be ready to start up the applications with minimum delay, if required. For example, many operating systems must read logical block address (LBA) zero to configure a logical unit. Although read access is allowed at the auxiliary in practice, the data on the auxiliary volumes cannot be read by a host, because most operating systems write a dirty bit to the file system when it is mounted. Because this write operation is not allowed on the auxiliary volume, the volume cannot be mounted.

This access is only provided where consistency can be guaranteed. However, there is no way in which coherency can be maintained between reads that are performed at the auxiliary and later write I/Os that are performed at the master.

To enable access to the auxiliary volume for host operations, you must stop the Metro Mirror relationship by specifying the -access parameter. While access to the auxiliary volume for host operations is enabled, the host must be instructed to mount the volume and perform related tasks before the application can be started, or it must be instructed to perform a recovery process.


For example, the Metro Mirror requirement to enable the auxiliary copy for access differentiates it from third-party mirroring software on the host, which aims to emulate a single, reliable disk regardless of what system is accessing it. Metro Mirror retains the property that there are two volumes in existence, but it suppresses one volume while the copy is being maintained. Using an auxiliary copy demands a conscious policy decision by the administrator that a failover is required and that the tasks to be performed on the host involved in establishing operation on the auxiliary copy are substantial. The goal is to make this rapid (much faster when compared to recovering from a backup copy) but not seamless. The failover process can be automated through failover management software. The SVC provides Simple Network Management Protocol (SNMP) traps and programming (or scripting) for the command-line interface (CLI) to enable this automation.

8.6.11 Valid combinations of FlashCopy, Metro Mirror, and Global Mirror


Table 8-7 outlines the combinations of FlashCopy and Metro Mirror or Global Mirror functions that are valid for a single volume.
Table 8-7 Volume valid combination

FlashCopy           Metro Mirror or Global    Metro Mirror or Global
                    Mirror Master             Mirror Auxiliary
FlashCopy Source    Supported                 Supported
FlashCopy Target    Not supported             Not supported

8.6.12 Metro Mirror configuration limits


Table 8-8 lists the Metro Mirror configuration limits.
Table 8-8 Metro Mirror configuration limits

Parameter                                                     Value
Number of Metro Mirror Consistency Groups per cluster         256
Number of Metro Mirror relationships per cluster              8192
Number of Metro Mirror relationships per Consistency Group    8192
Total volume size per I/O Group                               1024 TB

There is a per I/O Group limit of 1024 TB on the quantity of master and auxiliary volume address space that can participate in Metro Mirror and Global Mirror relationships. This maximum configuration will consume all 512 MB of bitmap space for the I/O Group and allow no FlashCopy bitmap space.
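As an illustrative cross-check of these numbers (our arithmetic, not a product specification): 512 MB of bitmap space is 2^32 bits, and 1024 TB of volume address space is 2^50 bytes, which works out to 1 bit of bitmap for each 256 KB of mirrored volume address space.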

8.7 Metro Mirror commands


For comprehensive details about Metro Mirror commands, see IBM System Storage SAN Volume Controller Command-Line Interface User's Guide, GC27-2287.

The command set for Metro Mirror contains two broad groups:
- Commands to create, delete, and manipulate relationships and Consistency Groups
- Commands to cause state changes

Where a configuration command affects more than one cluster, Metro Mirror coordinates the configuration activity between the clusters. Certain configuration commands can only be performed when the clusters are connected, and they fail with no effect when the clusters are disconnected. Other configuration commands are permitted even though the clusters are disconnected. The state is reconciled automatically by Metro Mirror when the clusters become connected again.

For any given command, with one exception, a single cluster actually receives the command from the administrator. This design is significant for defining the context for a CreateRelationship (mkrcrelationship) or CreateConsistencyGroup (mkrcconsistgrp) command, in which case the cluster receiving the command is called the local cluster. The exception is the command that sets clusters into a Metro Mirror partnership: the mkpartnership command must be issued to both the local and remote clusters.

The commands here are described as an abstract command set and are implemented as either method:
- A command-line interface (CLI), which can be used for scripting and automation
- A graphical user interface (GUI), which can be used for one-off tasks

8.7.1 Listing available SVC cluster partners


To list the clusters that are available for an SVC cluster partnership, use the svcinfo lsclustercandidate command.

svcinfo lsclustercandidate
The svcinfo lsclustercandidate command is used to list the clusters that are available for setting up a two-cluster partnership. This command is a prerequisite for creating Metro Mirror relationships.

8.7.2 Creating the SVC cluster partnership


To create an SVC cluster partnership, use the svctask mkpartnership command.

svctask mkpartnership
The svctask mkpartnership command is used to establish a one-way Metro Mirror partnership between the local cluster and a remote cluster. To establish a fully functional Metro Mirror partnership, you must issue this command to both clusters. This step is a prerequisite to creating Metro Mirror relationships between volumes on the SVC clusters. When creating the partnership, you can specify the bandwidth to be used by the background copy process between the local and the remote SVC cluster, and if it is not specified, the bandwidth defaults to 50 MBps. The bandwidth must be set to a value that is less than or equal to the bandwidth that can be sustained by the intercluster link.
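A minimal sketch, run once from each side; the cluster names ITSO_SVC1 and ITSO_SVC2 and the 50 MBps bandwidth are hypothetical examples:

From ITSO_SVC1: svctask mkpartnership -bandwidth 50 ITSO_SVC2
From ITSO_SVC2: svctask mkpartnership -bandwidth 50 ITSO_SVC1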


Background copy bandwidth effect on foreground I/O latency


The background copy bandwidth determines the rate at which the background copy for the SVC is attempted. The background copy bandwidth can affect the foreground I/O latency in one of three ways:
- If the background copy bandwidth is set too high for the Metro Mirror intercluster link capacity, the following results can occur:
  - The background copy I/Os can back up on the Metro Mirror intercluster link.
  - There is a delay in the synchronous auxiliary writes of foreground I/Os.
  - The foreground I/O latency increases as perceived by applications.
- If the background copy bandwidth is set too high for the storage at the primary site, the background copy read I/Os overload the master storage and delay foreground I/Os.
- If the background copy bandwidth is set too high for the storage at the secondary site, background copy writes at the auxiliary overload the secondary storage and again delay the synchronous auxiliary writes of foreground I/Os.

To set the background copy bandwidth optimally, make sure that you consider all three resources (the master storage, the intercluster link bandwidth, and the secondary storage). Provision the most restrictive of these three resources between the background copy bandwidth and the peak foreground I/O workload. This provisioning can be done by calculation (as previously described) or, alternatively, by determining experimentally how much background copy can be allowed before the foreground I/O latency becomes unacceptable, and then backing off to allow for peaks in workload and a safety margin.

svctask chpartnership
If you need to change the bandwidth that is available for background copy in an SVC cluster partnership, you can use the svctask chpartnership command to specify the new bandwidth.
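For example, to change the hypothetical partnership from the previous section to 100 MBps of background copy bandwidth:

svctask chpartnership -bandwidth 100 ITSO_SVC2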

8.7.3 Creating a Metro Mirror Consistency Group


To create a Metro Mirror Consistency Group, use the svctask mkrcconsistgrp command.

svctask mkrcconsistgrp
The svctask mkrcconsistgrp command is used to create a new empty Metro Mirror Consistency Group. The Metro Mirror Consistency Group name must be unique across all of the Consistency Groups that are known to the clusters owning this Consistency Group. If the Consistency Group involves two clusters, the clusters must be in communication throughout the creation process. The new Consistency Group does not contain any relationships and will be in the Empty state. Metro Mirror relationships can be added to the group either upon creation or afterward by using the svctask chrelationship command.
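For example, the following sketch creates a Consistency Group with the hypothetical name CG_W2K3_MM, spanning the local cluster and the remote cluster ITSO_CLUSTER_B:

svctask mkrcconsistgrp -cluster ITSO_CLUSTER_B -name CG_W2K3_MM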

8.7.4 Creating a Metro Mirror relationship


To create a Metro Mirror relationship, use the command svctask mkrcrelationship.


svctask mkrcrelationship
The svctask mkrcrelationship command is used to create a new Metro Mirror relationship, which persists until it is deleted. The auxiliary volume must be equal in size to the master volume or the command fails. If both volumes are in the same cluster, they must both be in the same I/O Group. The master and auxiliary volumes cannot be in an existing relationship, and neither can be the target of a FlashCopy mapping. This command returns the new relationship (relationship_id) when successful. When creating the Metro Mirror relationship, you can add it to an existing Consistency Group, or it can be a stand-alone Metro Mirror relationship if no Consistency Group is specified. To check whether the master or auxiliary volumes comply with the prerequisites to participate in a Metro Mirror relationship, use the svcinfo lsrcrelationshipcandidate command.
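For example, the following sketch creates a relationship between a hypothetical master volume MM_Vol_M on the local cluster and an auxiliary volume MM_Vol_A on the remote cluster ITSO_CLUSTER_B, and adds it to the Consistency Group CG_W2K3_MM:

svctask mkrcrelationship -master MM_Vol_M -aux MM_Vol_A -cluster ITSO_CLUSTER_B -consistgrp CG_W2K3_MM -name MM_Rel_1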

svcinfo lsrcrelationshipcandidate
The svcinfo lsrcrelationshipcandidate command is used to list available volumes that are eligible for a Metro Mirror relationship. When issuing the command, you can specify the source volume name and secondary cluster to list candidates that comply with prerequisites to create a Metro Mirror relationship. If the command is issued with no flags, all volumes that are not disallowed by another configuration state, such as being a FlashCopy target, are listed.
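For example, issuing the command with no flags lists all eligible volumes:

svcinfo lsrcrelationshipcandidate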

8.7.5 Changing a Metro Mirror relationship


To modify the properties of a Metro Mirror relationship, use the command svctask chrcrelationship.

svctask chrcrelationship
The svctask chrcrelationship command is used to modify the following properties of a Metro Mirror relationship:
Change the name of a Metro Mirror relationship.
Add a relationship to a group.
Remove a relationship from a group using the -force flag.

Adding a Metro Mirror relationship: When adding a Metro Mirror relationship to a Consistency Group that is not empty, the relationship must have the same state and copy direction as the group to which it is added.
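For example, the following sketch adds the hypothetical stand-alone relationship MM_Rel_1 to the Consistency Group CG_W2K3_MM, and then renames the relationship:

svctask chrcrelationship -consistgrp CG_W2K3_MM MM_Rel_1
svctask chrcrelationship -name MM_Rel_1_NEW MM_Rel_1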

8.7.6 Changing a Metro Mirror Consistency Group


To change the name of a Metro Mirror Consistency Group, use the svctask chrcconsistgrp command.

svctask chrcconsistgrp
The svctask chrcconsistgrp command is used to change the name of a Metro Mirror Consistency Group.
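For example, to rename the hypothetical Consistency Group CG_W2K3_MM:

svctask chrcconsistgrp -name CG_W2K3_MM_NEW CG_W2K3_MM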


8.7.7 Starting a Metro Mirror relationship


To start a stand-alone Metro Mirror relationship, use the svctask startrcrelationship command.

svctask startrcrelationship
The svctask startrcrelationship command is used to start the copy process of a Metro Mirror relationship. When issuing the command, you can set the copy direction if it is undefined and, optionally, mark the auxiliary volume of the relationship as clean. The command fails if it is used to attempt to start a relationship that is part of a Consistency Group. This command can only be issued to a relationship that is connected. For a relationship that is idling, this command assigns a copy direction (master and auxiliary roles) and begins the copy process. Otherwise, this command restarts a previous copy process that was stopped either by a stop command or by an I/O error. If the resumption of the copy process leads to a period when the relationship is inconsistent, you must specify the -force flag when restarting the relationship. This situation can arise if, for example, the relationship was stopped, and then further writes were performed on the original master of the relationship. The use of the -force flag here is a reminder that the data on the auxiliary will become inconsistent while resynchronization (background copying) occurs and, therefore, is not usable for DR purposes before the background copy has completed. In the Idling state, you must specify the master volume to indicate the copy direction. In other connected states, you can provide the -primary argument, but it must match the existing setting.
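For example, to start the hypothetical stand-alone relationship MM_Rel_1 from the Idling state with the current master volume as the copy source:

svctask startrcrelationship -primary master MM_Rel_1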

8.7.8 Stopping a Metro Mirror relationship


To stop a stand-alone Metro Mirror relationship, use the svctask stoprcrelationship command.

svctask stoprcrelationship
The svctask stoprcrelationship command is used to stop the copy process for a relationship. It can also be used to enable write access to a consistent auxiliary volume by specifying the -access flag. This command applies to a stand-alone relationship. It is rejected if it is addressed to a relationship that is part of a Consistency Group. You can issue this command to stop a relationship that is copying from master to auxiliary. If the relationship is in an inconsistent state, any copy operation stops and does not resume until you issue an svctask startrcrelationship command. Write activity is no longer copied from the master to the auxiliary volume. For a relationship in the ConsistentSynchronized state, this command causes a consistency freeze. When a relationship is in a consistent state (that is, in the ConsistentStopped, ConsistentSynchronized, or ConsistentDisconnected state), you can use the -access parameter with the stoprcrelationship command to enable write access to the auxiliary volume.
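For example, to stop the hypothetical relationship MM_Rel_1 and enable write access to its consistent auxiliary volume:

svctask stoprcrelationship -access MM_Rel_1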


8.7.9 Starting a Metro Mirror Consistency Group


To start a Metro Mirror Consistency Group, use the svctask startrcconsistgrp command.

svctask startrcconsistgrp
The svctask startrcconsistgrp command is used to start a Metro Mirror Consistency Group. This command can only be issued to a Consistency Group that is connected. For a Consistency Group that is idling, this command assigns a copy direction (master and auxiliary roles) and begins the copy process. Otherwise, this command restarts a previous copy process that was stopped either by a stop command or by an I/O error.
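For example, to start the hypothetical Consistency Group CG_W2K3_MM with the master volumes as the copy source:

svctask startrcconsistgrp -primary master CG_W2K3_MM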

8.7.10 Stopping a Metro Mirror Consistency Group


To stop a Metro Mirror Consistency Group, use the svctask stoprcconsistgrp command.

svctask stoprcconsistgrp
The svctask stoprcconsistgrp command is used to stop the copy process for a Metro Mirror Consistency Group. It can also be used to enable write access to the auxiliary volumes in the group if the group is in a consistent state. If the Consistency Group is in an inconsistent state, any copy operation stops and does not resume until you issue the svctask startrcconsistgrp command. Write activity is no longer copied from the master to the auxiliary volumes that belong to the relationships in the group. For a Consistency Group in the ConsistentSynchronized state, this command causes a consistency freeze. When a Consistency Group is in a consistent state (for example, in the ConsistentStopped, ConsistentSynchronized, or ConsistentDisconnected state), the -access argument can be used with the svctask stoprcconsistgrp command to enable write access to the auxiliary volumes within that group.
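For example, to stop the hypothetical Consistency Group CG_W2K3_MM and enable write access to its auxiliary volumes:

svctask stoprcconsistgrp -access CG_W2K3_MM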

8.7.11 Deleting a Metro Mirror relationship


To delete a Metro Mirror relationship, use the svctask rmrcrelationship command.

svctask rmrcrelationship
The svctask rmrcrelationship command is used to delete the relationship that is specified. Deleting a relationship only deletes the logical relationship between the two volumes. It does not affect the volumes themselves. If the relationship is disconnected at the time that the command is issued, the relationship is only deleted on the cluster on which the command is being run. When the clusters reconnect, then the relationship is automatically deleted on the other cluster. Alternatively, if the clusters are disconnected, and you still want to remove the relationship on both clusters, you can issue the rmrcrelationship command independently on both of the clusters. If you delete an inconsistent relationship, the auxiliary volume becomes accessible even though it is still inconsistent. This situation is the one case in which Metro Mirror does not inhibit access to inconsistent data.
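For example, to delete the hypothetical stand-alone relationship MM_Rel_1:

svctask rmrcrelationship MM_Rel_1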


8.7.12 Deleting a Metro Mirror Consistency Group


To delete a Metro Mirror Consistency Group, use the svctask rmrcconsistgrp command.

svctask rmrcconsistgrp
The svctask rmrcconsistgrp command is used to delete a Metro Mirror Consistency Group. This command deletes the specified Consistency Group. You can issue this command for any existing Consistency Group. If the Consistency Group is disconnected at the time that the command is issued, the Consistency Group is only deleted on the cluster on which the command is being run. When the clusters reconnect, the Consistency Group is automatically deleted on the other cluster. Alternatively, if the clusters are disconnected, and you still want to remove the Consistency Group on both clusters, you can issue the svctask rmrcconsistgrp command separately on both of the clusters. If the Consistency Group is not empty, the relationships within it are removed from the Consistency Group before the group is deleted. These relationships then become stand-alone relationships. The state of these relationships is not changed by the action of removing them from the Consistency Group.
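For example, to delete the hypothetical Consistency Group CG_W2K3_MM:

svctask rmrcconsistgrp CG_W2K3_MM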

8.7.13 Reversing a Metro Mirror relationship


To reverse a Metro Mirror relationship, use the svctask switchrcrelationship command.

svctask switchrcrelationship
The svctask switchrcrelationship command is used to reverse the roles of the master and auxiliary volumes when a stand-alone relationship is in a consistent state. When issuing the command, the desired master is specified.
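For example, to make the auxiliary volume the new master in the hypothetical relationship MM_Rel_1:

svctask switchrcrelationship -primary aux MM_Rel_1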

8.7.14 Reversing a Metro Mirror Consistency Group


To reverse a Metro Mirror Consistency Group, use the svctask switchrcconsistgrp command.

svctask switchrcconsistgrp
The svctask switchrcconsistgrp command is used to reverse the roles of the master and auxiliary volumes when a Consistency Group is in a consistent state. This change is applied to all of the relationships in the Consistency Group, and when issuing the command, the desired master is specified.
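For example, to reverse the copy direction for all relationships in the hypothetical Consistency Group CG_W2K3_MM:

svctask switchrcconsistgrp -primary aux CG_W2K3_MM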

8.7.15 Background copy


Metro Mirror paces the rate at which background copy is performed by the appropriate relationships. Background copy takes place on relationships that are in the InconsistentCopying state with a status of Online. The quota of background copy (configured on the intercluster link) is divided evenly between the nodes that are performing background copy for one of the eligible relationships. This allocation is made without regard for the number of disks for which the node is responsible. Each node in turn divides its allocation evenly between the multiple relationships performing a background copy. For intracluster relationships, each node is assigned a static quota of 25 MBps.


Note: The SVC partnership bandwidth limit is specified in megabytes per second and only applies during initial copy or resynchronization. This number is independent of whatever transport method you are using to get data between locations.

8.8 Global Mirror


In the following topics, we describe the Global Mirror copy service, which is an asynchronous remote copy service. It provides and maintains a consistent mirrored copy of a source volume to a target volume. Global Mirror establishes a Global Mirror relationship between two volumes of equal size. The volumes in a Global Mirror relationship are referred to as the master (source) volume and the auxiliary (target) volume, as with Metro Mirror. Consistency Groups can be used to maintain data integrity for dependent writes, similar to FlashCopy Consistency Groups. Global Mirror writes data to the auxiliary volume asynchronously, which means that the host receives confirmation that a write to the master volume is complete before the I/O completes on the auxiliary volume.

8.8.1 Intracluster Global Mirror


Although Global Mirror is available for intracluster use, it has no functional value for production use. Intracluster Metro Mirror provides the same capability with less overhead. However, leaving this functionality in place simplifies testing and allows for client experimentation (for example, to validate server failover on a single test cluster). Note that, as with intracluster Metro Mirror, you need to take into consideration the increase in the license requirement, because both the source and target will exist on the same SVC cluster.

8.8.2 Intercluster Global Mirror


Intercluster Global Mirror operations require a pair of SVC clusters connected by a number of intercluster links. The two SVC clusters must be defined in an SVC cluster partnership to establish a fully functional Global Mirror relationship.

Limit: When a local and a remote fabric are connected together for Global Mirror purposes, the ISL hop count between a local node and a remote node must not exceed seven hops.

8.8.3 Asynchronous remote copy


Global Mirror is an asynchronous remote copy technique. In asynchronous remote copy, write operations are completed on the primary site and the write acknowledgement is sent to the host before it is received at the secondary site. An update of this write operation is sent to the secondary site at a later stage, which provides the capability to perform remote copy over distances exceeding the limitations of synchronous remote copy. The Global Mirror function provides the same function as Metro Mirror Remote Copy, but over long distance links with higher latency, without requiring the hosts to wait for the full round-trip delay of the long distance link.


Figure 8-33 shows that a write operation to the master volume is acknowledged back to the host issuing the write before the write operation is mirrored to the cache for the auxiliary volume.

Figure 8-33 Global Mirror write sequence

The Global Mirror algorithms maintain a consistent image on the auxiliary at all times. They achieve this consistent image by identifying sets of I/Os that are active concurrently at the master, assigning an order to those sets, and applying those sets of I/Os in the assigned order at the secondary. As a result, Global Mirror maintains the features of Write Ordering and Read Stability that are described in this chapter. The multiple I/Os within a single set are applied concurrently. The process that marshals the sequential sets of I/Os operates at the secondary cluster and is therefore not subject to the latency of the long-distance link. These two elements of the protocol ensure that the throughput of the total cluster can be grown by increasing cluster size while maintaining consistency across a growing data set. In a failover scenario, where the secondary site needs to become the master source of data, certain updates might be missing at the secondary site. Therefore, any applications that use this data must have an external mechanism for recovering the missing updates and reapplying them, for example, a transaction log replay.

8.8.4 SVC Global Mirror features


SVC Global Mirror supports the following features:
Asynchronous remote copy of volumes dispersed over metropolitan-scale distances is supported.
SVC implements the Global Mirror relationship between a volume pair, with each volume in the pair being managed by an SVC cluster.
SVC supports intracluster Global Mirror, where both volumes belong to the same cluster (and I/O Group), although, as stated earlier, this functionality is better suited to Metro Mirror.


SVC supports intercluster Global Mirror, where each volume belongs to a separate SVC cluster. A given SVC cluster can be configured for partnership with between one and three other clusters.
Intercluster and intracluster Global Mirror can be used concurrently within a cluster for separate relationships.
SVC does not require a control network or fabric to be installed to manage Global Mirror. For intercluster Global Mirror, the SVC maintains a control link between the two clusters. This control link is used to control the state and to coordinate updates at either end. The control link is implemented on top of the same FC fabric connection that the SVC uses for Global Mirror I/O.
SVC implements a configuration model that maintains the Global Mirror configuration and state through major events, such as failover, recovery, and resynchronization, to minimize user configuration action through these events.
SVC implements flexible resynchronization support, enabling it to resynchronize volume pairs that have experienced write I/Os to both disks and to resynchronize only those regions that are known to have changed.
An optional feature for Global Mirror permits a delay simulation to be applied on writes that are sent to auxiliary volumes.
As of 6.3.0, Global Mirror source and target volumes may be associated with Change Volumes.

Colliding writes
Prior to V4.3.1, the Global Mirror algorithm required that only a single write is active on any given 512-byte LBA of a volume. If a further write is received from a host while the auxiliary write is still active, even though the master write might have completed, the new host write is delayed until the auxiliary write is complete. This restriction is needed in case a series of writes to the auxiliary have to be retried (called reconstruction). Conceptually, the data for reconstruction comes from the master volume.

If multiple writes are allowed to be applied to the master for a given sector, only the most recent write gets the correct data during reconstruction, and if reconstruction is interrupted for any reason, the intermediate state of the auxiliary is inconsistent. Applications that deliver such write activity will not achieve the performance that Global Mirror is intended to support. A volume statistic is maintained about the frequency of these collisions.

From V4.3.1 onward, an attempt is made to allow multiple writes to a single location to be outstanding in the Global Mirror algorithm. There is still a need for master writes to be serialized, and the intermediate states of the master data must be kept in a non-volatile journal while the writes are outstanding to maintain the correct write ordering during reconstruction. Reconstruction must never overwrite data on the auxiliary with an earlier version. The volume statistic monitoring colliding writes is now limited to those writes that are not affected by this change. Figure 8-34 on page 438 shows a colliding write sequence example.


Figure 8-34 Colliding writes example

These numbers correspond to the numbers in Figure 8-34:
(1) The first write is performed from the host to LBA X.
(2) The host is provided acknowledgment that the write is complete, even though the mirrored write to the auxiliary volume has not yet completed. Steps (1) and (2) occur asynchronously with the first write.
(3) A second write is performed from the host, also to LBA X. If this write occurs prior to (2), the write is written to the journal file.
(4) The host is provided acknowledgment that the second write is complete.

Delay simulation
An optional feature for Global Mirror permits a delay simulation to be applied on writes that are sent to auxiliary volumes. This feature allows testing to be performed that detects colliding writes, and therefore, it can be used to test an application before the full deployment of the feature. The feature can be enabled separately for intracluster or intercluster Global Mirror. The delay setting is specified by using the chcluster command and viewed by using the lscluster command. The gm_intra_delay_simulation field expresses the amount of time that intracluster auxiliary I/Os are delayed. The gm_inter_delay_simulation field expresses the amount of time that intercluster auxiliary I/Os are delayed. A value of zero (0) disables the feature.

Tip: If you are experiencing repeated problems with the delay on your link, make sure that the delay simulator was properly disabled.
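For example, the following sketch sets a 20 millisecond delay on intercluster auxiliary writes and then disables it again. The parameter name and the delay value are illustrative; they are assumed to follow the lscluster field names described above, so verify the exact syntax in the CLI reference:

svctask chcluster -gminterdelaysimulation 20
svctask chcluster -gminterdelaysimulation 0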

Multiple Cluster Mirroring


The rules for a Global Mirror Multiple Cluster Mirroring environment are the same as the rules in a Metro Mirror environment; see 8.6.4, Multiple Cluster Mirroring on page 413.

8.8.5 Global Mirror relationship between master and auxiliary volumes


When creating a Global Mirror relationship, the master volume is initially assigned as the master, and the auxiliary volume is initially assigned as the auxiliary. This design implies that the initial copy direction is mirroring the master volume to the auxiliary volume. After the initial synchronization is complete, the copy direction can be changed, if appropriate.

In the most common applications of Global Mirror, the master volume contains the production copy of the data and is used by the host application. The auxiliary volume contains the mirrored copy of the data and is used for failover in DR scenarios. Due to the nature of consistency requirements and SCSI protocol standards, the auxiliary or target volume cannot be actively in use while the Global Mirror relationship is actively copying data.

Notes:
A volume can only be part of one Global Mirror relationship at a time.
As of SVC 6.2.0.0, a volume that is a FlashCopy target can be part of a Global Mirror relationship.

8.8.6 Using Change Volumes with Global Mirror


Global Mirror is designed to achieve an RPO that is as low as possible, so that data is as up-to-date as possible. This design places strict requirements on your infrastructure, and in certain situations, with low network link quality or congested or overloaded hosts, you may be impacted by multiple 1920 (congestion) errors. Congestion errors happen in three primary situations:
1. Congestion at the source site via host or network.
2. Congestion on the network link or network path.
3. Congestion at the target site via host or network.

With 6.3.0, Global Mirror receives new functionality that is designed to address several conditions that negatively impact some Global Mirror implementations:
Estimation of bandwidth requirements tends to be complex.
It is often difficult to guarantee that the latency and bandwidth requirements can be met.
Congested hosts on either the source or target site can cause disruption.
Congested network links can cause disruption with only intermittent peaks.

To address these issues, Change Volumes have been added as an option for Global Mirror relationships. Change Volumes leverage the FlashCopy functionality, but they cannot be manipulated as FlashCopy volumes, because they are for a special purpose only. Change Volumes provide the ability to replicate point-in-time images on a cycling period (the default is 300 seconds). This means that your change rate only needs to include the condition of the data at the point in time that the image was taken, instead of all the updates during the period. This capability can provide significant reductions in replication volume. Figure 8-35 shows a basic Global Mirror relationship without Change Volumes.


Figure 8-35 Global Mirror without Change Volumes

With Change Volumes, this configuration looks like Figure 8-36.

Figure 8-36 Global Mirror with Change Volumes

With Change Volumes, a FlashCopy mapping exists between the primary volume and the primary Change Volume. The mapping is updated on the cycling period (60 seconds to one day). The primary Change Volume is then replicated to the secondary Global Mirror volume at the target site, which is then captured in another Change Volume on the target site. This approach provides an always consistent image at the target site and protects your data from being inconsistent during resynchronization. Let's take a closer look at how Change Volumes might save you replication traffic.


Figure 8-37 Global Mirror IO replication without Change Volumes

In Figure 8-37, you can see a number of I/Os on the source and the same number on the target, in the same order. Assuming that this is the same set of data being updated over and over, this is wasted network traffic, and the I/O can be completed much more efficiently, as shown in Figure 8-38.

Figure 8-38 Global Mirror IO with Change Volumes

In Figure 8-38, the same data is being updated repeatedly, so Change Volumes demonstrate significant I/O transmission savings by needing to send only I/O number 16, which was the last I/O before the cycling period. The cycling period can be adjusted with the chrcrelationship -cycleperiodseconds <60-86400> command from the CLI. If a copy does not complete in the cycle period, the next cycle does not start until the prior one has completed. For this reason, using Change Volumes gives you two possibilities for RPO:
1. If your replication completes in the cycling period, your RPO is twice the cycling period.
2. If your replication does not complete within the cycling period, your RPO is twice the completion time. The next cycling period starts immediately after the prior one is finished.

Weigh your business requirements carefully against the performance of Global Mirror with Change Volumes. Global Mirror with Change Volumes increases the intercluster traffic for more frequent cycling periods, so going as short as possible is not always the answer. In most cases, the default should meet requirements and perform reasonably well.
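For example, to set the cycling period of a hypothetical Global Mirror relationship GM_Rel_1 back to the default of 300 seconds:

svctask chrcrelationship -cycleperiodseconds 300 GM_Rel_1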


Note: When making your Global Mirror volumes with Change Volumes, make sure that you remember to select the Change Volume on the auxiliary (target) site. Failure to do so will leave you exposed during a resynchronization operation.

Important: The GUI for 6.3.0 automatically creates Change Volumes for you. However, it is a limitation of this initial release that they are fully provisioned volumes. To save space, you should create thin-provisioned volumes beforehand and use the existing volume option when selecting your Change Volumes.
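For example, assuming pre-created thin-provisioned volumes with the hypothetical names GM_Chg_M and GM_Chg_A, a sketch of associating them with a hypothetical relationship GM_Rel_1 from the CLI is shown here (the -auxchange form is issued against the cluster that owns the auxiliary volume; verify the exact parameter names in the CLI reference):

svctask chrcrelationship -masterchange GM_Chg_M GM_Rel_1
svctask chrcrelationship -auxchange GM_Chg_A GM_Rel_1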

8.8.7 Importance of write ordering


Many applications that use block storage have a requirement to survive failures, such as the loss of power or a software crash, and to not lose data that existed prior to the failure. Because many applications must perform large numbers of update operations in parallel to that block storage, maintaining write ordering is key to ensuring the correct operation of applications following a disruption. An application that performs a high volume of database updates is usually designed with the concept of dependent writes. With dependent writes, it is important to ensure that an earlier write has completed before a later write is started. Reversing the order of writes, or performing them in a different order than the application intended, can undermine the application's algorithms and can lead to problems, such as detected or undetected data corruption.

The SVC Global Mirror implementation operates in a manner that is designed to keep a consistent image at the secondary site at all times. This is accomplished via complex algorithms that identify sets of data and number those sets of data in sequence. The data is then applied at the secondary site in the defined sequence. Operating in this manner ensures that, as long as the relationship is in the ConsistentSynchronized state, your Global Mirror target data is at least crash consistent and allows for quick recovery via your application crash recovery facilities. See 8.4.3, Consistency Groups on page 383 for more information regarding dependent writes.

8.8.8 Global Mirror Consistency Groups


Global Mirror Consistency Groups address the issue of dependent writes across volumes, where the objective is to preserve data consistency across multiple Global Mirrored volumes. Consistency Groups ensure a consistent data set, because applications have relational data spanning multiple volumes. A Global Mirror Consistency Group can contain an arbitrary number of relationships, up to the maximum number of Global Mirror relationships that is supported by the SVC cluster. Global Mirror commands can be issued to a Global Mirror Consistency Group, and thereby simultaneously for all Global Mirror relationships that are defined within that Consistency Group, or to a single Global Mirror relationship if it is not part of a Global Mirror Consistency Group. For example, when issuing a Global Mirror start command to the Consistency Group, all of the Global Mirror relationships in the Consistency Group are started at the same time.


Figure 8-39 on page 443 illustrates the concept of Global Mirror Consistency Groups. Because GM_Relationship 1 and GM_Relationship 2 are part of the Consistency Group, they can be handled as one entity. The stand-alone GM_Relationship 3 is handled separately.

Figure 8-39 Global Mirror Consistency Group

Certain uses of Global Mirror require the manipulation of more than one relationship. Global Mirror Consistency Groups provide the ability to group relationships so that they are manipulated in unison. Global Mirror relationships within a Consistency Group can be in any form:
Global Mirror relationships can be part of a Consistency Group, or they can be stand-alone and therefore handled as single instances.
A Consistency Group can contain zero (0) or more relationships. An empty Consistency Group, with zero relationships in it, has little purpose until it is assigned its first relationship, except that it has a name.
All of the relationships in a Consistency Group must have matching master and auxiliary clusters.

Although it is possible to use Consistency Groups to manipulate sets of relationships that do not need to satisfy these strict rules, such manipulation can lead to undesired side effects. The rules behind a Consistency Group mean that certain configuration commands are prohibited. These configuration commands are not prohibited if the relationship is not part of a Consistency Group.

For example, consider the case of two applications that are completely independent, yet they are placed into a single Consistency Group. If a loss of synchronization occurs, and a background copy process is required to recover synchronization, then while this process is in progress, Global Mirror rejects attempts to enable access to the auxiliary volumes of either application.


If one application finishes its background copy before the other, Global Mirror still refuses to grant access to its auxiliary volume. Even though it is safe in this case, Global Mirror policy refuses access to the entire Consistency Group if any part of it is inconsistent. Stand-alone relationships and Consistency Groups share a common configuration and state model. All of the relationships in a Consistency Group that is not empty have the same state as the Consistency Group.

8.8.9 Distribution of work among nodes


For best performance, Global Mirror volumes should have their preferred nodes evenly distributed among the nodes of the clusters. Each volume within an I/O Group has a preferred node property that can be used to balance the I/O load between nodes in that group. Global Mirror also uses this property to route I/O between clusters.

8.8.10 Background copy performance


Background copy resources for intercluster remote copy are available within two nodes of an I/O Group to perform background copy at a maximum of 200 MBps (each data read and data written) in total. The background copy performance is subject to sufficient RAID controller bandwidth. Performance is also subject to other potential bottlenecks (such as the intercluster fabric) and possible contention from host I/O for the SVC bandwidth resources.

Background copy I/O is scheduled to avoid bursts of activity that might have an adverse effect on system behavior. An entire grain of tracks on one volume is processed at around the same time, but not as a single I/O. Double buffering is used to try to take advantage of sequential performance within a grain. However, the next grain within the volume might not be scheduled for a while. Multiple grains might be copied simultaneously and might be enough to satisfy the requested rate, unless the available resources cannot sustain the requested rate. Background copy proceeds from the low LBA to the high LBA in sequence to avoid convoy conflicts with FlashCopy, which operates in the opposite direction. Background copy is not expected to create convoy conflicts with sequential applications, because it tends to move across disks more often.

8.8.11 Thin-provisioned background copy


Metro Mirror and Global Mirror relationships will preserve the space-efficiency of the master. Conceptually, the background copy process detects an unallocated region of the master and sends a special zero buffer to the auxiliary. If the auxiliary volume is thin-provisioned, and the region is unallocated, the special buffer prevents a write (and, therefore, an allocation). If the auxiliary volume is not thin-provisioned, or the region in question is an allocated region of a thin-provisioned volume, a buffer of real zeros is synthesized on the auxiliary and written as normal.

8.9 Global Mirror process


There are several steps in the Global Mirror process:
1. An SVC cluster partnership is created between two SVC clusters (for intercluster Global Mirror).
2. A Global Mirror relationship is created between two volumes of the same size.

3. To manage multiple Global Mirror relationships as one entity, the relationships can be made part of a Global Mirror Consistency Group to ensure data consistency across multiple Global Mirror relationships, or simply for ease of management.
4. The Global Mirror relationship is started, and when the background copy has completed, the relationship is consistent and synchronized.
5. When synchronized, the auxiliary volume holds a copy of the production data at the master that can be used for DR.
6. To access the auxiliary volume, the Global Mirror relationship must be stopped with the access option enabled before write I/O is submitted to the auxiliary.
7. The remote host server is mapped to the auxiliary volume, and the disk is available for I/O.

8.9.1 Methods of synchronization


This section describes two methods that can be used to establish a relationship.

Full synchronization after creation


Full synchronization after creation is the default method. It is the simplest method, and it requires no administrative activity apart from issuing the necessary commands. However, in certain environments, the bandwidth that is available makes this method unsuitable. Use this sequence for a single relationship:
A new relationship is created (mkrcrelationship is issued) without specifying the -sync flag.
A new relationship is started (startrcrelationship is issued) without the -clean flag.

Synchronized before creation


In this method, the administrator must ensure that the master and auxiliary volumes contain identical data before creating the relationship. There are two ways to ensure that the master and auxiliary volumes contain identical data:
Both disks are created with the security delete (-fmtdisk) feature to make all data zero.
A complete tape image (or other method of moving data) is copied from one disk to the other disk.

With this technique, do not allow I/O on either the master or auxiliary before the relationship is established. Then, the administrator must ensure that these commands are issued (a sketch of the sequence follows the Attention note):
A new relationship is created (mkrcrelationship is issued) with the -sync flag.
A new relationship is started (startrcrelationship is issued) without the -clean flag.

Attention: Failure to perform these steps correctly can cause Global Mirror to report the relationship as consistent when it is not, thereby creating a data loss or data integrity exposure for hosts accessing data on the auxiliary volume.
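For example, a sketch of this sequence for a hypothetical pre-synchronized Global Mirror volume pair is as follows (volume and cluster names are illustrative):

svctask mkrcrelationship -master GM_Vol_M -aux GM_Vol_A -cluster ITSO_CLUSTER_B -global -sync -name GM_Rel_2
svctask startrcrelationship GM_Rel_2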

8.9.2 Global Mirror states and events


In this section, we explain the states of a Global Mirror relationship and the series of events that modify these states. Figure 8-40 shows an overview of the states that apply to a Global Mirror relationship in the connected state.

Figure 8-40 Global Mirror state diagram

When creating the Global Mirror relationship, you can specify whether the auxiliary volume is already in sync with the master volume, and the background copy process is then skipped. This capability is especially useful when creating Global Mirror relationships for volumes that have been created with the format option. The following steps explain the Global Mirror state diagram (these numbers correspond to the numbers in Figure 8-40 on page 446):

Step 1:
a. The Global Mirror relationship is created with the -sync option, and the Global Mirror relationship enters the ConsistentStopped state.
b. The Global Mirror relationship is created without specifying that the master and auxiliary volumes are in sync, and the Global Mirror relationship enters the InconsistentStopped state.

Step 2:
a. When starting a Global Mirror relationship in the ConsistentStopped state, it enters the ConsistentSynchronized state. This state implies that no updates (write I/O) have been performed on the master volume while in the ConsistentStopped state. Otherwise, you must specify the -force option, and the Global Mirror relationship then enters the InconsistentCopying state while the background copy is started.
b. When starting a Global Mirror relationship in the InconsistentStopped state, it enters the InconsistentCopying state while the background copy is started.


Step 3:
a. When the background copy completes, the Global Mirror relationship transitions from the InconsistentCopying state to the ConsistentSynchronized state.

Step 4:
a. When stopping a Global Mirror relationship in the ConsistentSynchronized state, where specifying the -access option enables write I/O on the auxiliary volume, the Global Mirror relationship enters the Idling state.
b. To enable write I/O on the auxiliary volume when the Global Mirror relationship is in the ConsistentStopped state, issue the svctask stoprcrelationship command with the -access option, and the Global Mirror relationship enters the Idling state.

Step 5:
a. When starting a Global Mirror relationship that is in the Idling state, you must specify the -primary argument to set the copy direction. If no write I/O has been performed (to either the master or auxiliary volume) while in the Idling state, the Global Mirror relationship enters the ConsistentSynchronized state.
b. If write I/O has been performed to either the master or the auxiliary volume, you must specify the -force option. The Global Mirror relationship then enters the InconsistentCopying state while the background copy is started.

If the Global Mirror relationship is intentionally stopped or experiences an error, a state transition is applied. For example, Global Mirror relationships in the ConsistentSynchronized state enter the ConsistentStopped state, and Global Mirror relationships in the InconsistentCopying state enter the InconsistentStopped state. If the connection is broken between the SVC clusters in a partnership, all of the (intercluster) Global Mirror relationships enter a Disconnected state. For further information, refer to Connected versus disconnected on page 447.

Common configuration and state model: Stand-alone relationships and Consistency Groups share a common configuration and state model. All of the Global Mirror relationships in a Consistency Group that is not empty have the same state as the Consistency Group.

State overview
The SVC-defined concepts of state are key to understanding the configuration concepts, so we explain them in more detail here.

Connected versus disconnected


This distinction can arise when a Global Mirror relationship is created with the two volumes in separate clusters. Under certain error scenarios, communications between the two clusters might be lost. For example, power might fail, causing one complete cluster to disappear. Alternatively, the fabric connection between the two clusters might fail, leaving the two clusters running but unable to communicate with each other. When the two clusters can communicate, the clusters and the relationships spanning them are described as connected. When they cannot communicate, the clusters and the relationships spanning them are described as disconnected.


In this scenario, each cluster is left with half of the relationship, and each cluster has only a portion of the information that was available to it before. Only a subset of the normal configuration activity is available. The disconnected relationships are portrayed as having a changed state. The new states describe what is known about the relationship and which configuration commands are permitted. When the clusters can communicate again, the relationships become connected again. Global Mirror automatically reconciles the two state fragments, taking into account any configuration activity or other event that took place while the relationship was disconnected. As a result, the relationship can either return to the state that it was in when it became disconnected or it can enter another connected state. Relationships that are configured between volumes in the same SVC cluster (intracluster) will never be described as being in a disconnected state.

Consistent versus inconsistent


Relationships or Consistency Groups that contain relationships can be described as being consistent or inconsistent. The consistent or inconsistent property describes the state of the data on the auxiliary volume in relation to the data on the master volume. Consider the consistent or inconsistent property to be a property of the auxiliary volume.

An auxiliary volume is described as consistent if it contains data that might have been read by a host system from the master if power had failed at an imaginary point in time while I/O was in progress, and power was later restored. This imaginary point in time is defined as the recovery point. The requirements for consistency are expressed with respect to activity at the master up to the recovery point:
The auxiliary volume contains the data from all writes to the master for which the host received successful completion and that data has not been overwritten by a subsequent write (before the recovery point).
For writes for which the host did not receive successful completion (that is, the host received bad completion or no completion at all), if the host subsequently performed a read from the master of that data, that read returned successful completion, and no later write was sent (before the recovery point), the auxiliary contains the same data as the data that was returned by the read from the master.

From the point of view of an application, consistency means that an auxiliary volume contains the same data as the master volume at the recovery point (the time at which the imaginary power failure occurred). If an application is designed to cope with an unexpected power failure, this guarantee of consistency means that the application will be able to use the auxiliary and begin operation just as though it had been restarted after the hypothetical power failure. Again, the application is dependent on the key properties of consistency:
Write ordering
Read stability for correct operation at the auxiliary

If a relationship, or a set of relationships, is inconsistent and an attempt is made to start an application using the data in the secondaries, a number of outcomes are possible:
The application might decide that the data is corrupt and crash or exit with an error code.
The application might fail to detect that the data is corrupt and return erroneous data.
The application might work without a problem.


Because of the risk of data corruption, and, in particular, undetected data corruption, Global Mirror strongly enforces the concept of consistency and prohibits access to inconsistent data. You can apply consistency as a concept to a single relationship or to a set of relationships in a Consistency Group. Write ordering is a concept that an application can maintain across a number of disks that are accessed through multiple systems, and therefore, consistency must operate across all of those disks. When deciding how to use Consistency Groups, the administrator must consider the scope of an application's data, taking into account all of the interdependent systems that communicate and exchange information. If two programs or systems communicate and store details as a result of the information exchanged, either of the following actions might be taken:
All of the data that is accessed by the group of systems is placed into a single Consistency Group.
The systems are recovered independently (each within its own Consistency Group). Then, each system performs recovery with the other applications to become consistent with them.

Consistent versus synchronized


A copy that is consistent and up-to-date is described as synchronized. In a synchronized relationship, the master and auxiliary volumes only differ in the regions where writes are outstanding from the host. Consistency does not mean that the data is up-to-date. A copy can be consistent and yet contain data that was frozen at an earlier point in time. Write I/O might have continued to a master and not have been copied to the auxiliary. This state arises when it becomes impossible to keep up-to-date and maintain consistency. An example is a loss of communication between clusters when writing to the auxiliary.

When communication is lost for an extended period of time, Global Mirror tracks the changes that occur on the master volumes, but not the order of these changes or the details of these changes (write data). When communication is restored, it is impossible to make the auxiliary synchronized without sending write data to the auxiliary out of order and, therefore, losing consistency. You can use two policies to cope with this situation:
Make a point-in-time copy of the consistent auxiliary before allowing the auxiliary to become inconsistent. In the event of a disaster, before consistency is achieved again, the point-in-time copy target provides a consistent, though out-of-date, image.
Accept the loss of consistency and the loss of a useful auxiliary while making it synchronized.

Detailed states
The following sections detail the states that are portrayed to the user for either Consistency Groups or relationships. They also detail the extra information that is available in each state. We describe the major states to provide guidance regarding the available configuration commands.

InconsistentStopped
InconsistentStopped is a connected state. In this state, the master is accessible for read and write I/O, but the auxiliary is inaccessible for either read or write I/O. A copy process needs to be started to make the auxiliary consistent.


This state is entered when the relationship or Consistency Group was InconsistentCopying and has either suffered a persistent error or received a stop command that has caused the copy process to stop. A start command causes the relationship or Consistency Group to move to the InconsistentCopying state. A stop command is accepted, but has no effect. If the relationship or Consistency Group becomes disconnected, the auxiliary side transitions to InconsistentDisconnected. The master side transitions to IdlingDisconnected.

InconsistentCopying
InconsistentCopying is a connected state. In this state, the master is accessible for read and write I/O, but the auxiliary is inaccessible for either read or write I/O. This state is entered after a start command is issued to an InconsistentStopped relationship or Consistency Group. It is also entered when a forced start is issued to an Idling or ConsistentStopped relationship or Consistency Group. In this state, a background copy process runs, which copies data from the master to the auxiliary volume. In the absence of errors, an InconsistentCopying relationship is active, and the copy progress increases until the copy process completes. In certain error situations, the copy progress might freeze or even regress. A persistent error or stop command places the relationship or Consistency Group into the InconsistentStopped state. A start command is accepted, but has no effect. If the background copy process completes on a stand-alone relationship, or on all relationships for a Consistency Group, the relationship or Consistency Group transitions to the ConsistentSynchronized state. If the relationship or Consistency Group becomes disconnected, the auxiliary side transitions to InconsistentDisconnected. The master side transitions to IdlingDisconnected.

ConsistentStopped
ConsistentStopped is a connected state. In this state, the auxiliary contains a consistent image, but it might be out-of-date with respect to the master. This state can arise when a relationship is in the ConsistentSynchronized state and experiences an error that forces a Consistency Freeze. It can also arise when a relationship is created with a CreateConsistentFlag set to true. Normally, following an I/O error, subsequent write activity causes updates to the master, and the auxiliary is no longer synchronized (set to false). In this case, to reestablish synchronization, consistency must be given up for a period. A start command with the -force option must be used to acknowledge this situation, and the relationship or Consistency Group transitions to InconsistentCopying. Issue this command only after all of the outstanding events are repaired. In the unusual case where the master and auxiliary are still synchronized (perhaps following a user stop, and no further write I/O was received), a start command takes the relationship to ConsistentSynchronized. No -force option is required. Also, in this unusual case, a switch command is permitted that moves the relationship or Consistency Group to ConsistentSynchronized and reverses the roles of the master and the auxiliary.


If the relationship or Consistency Group becomes disconnected, the auxiliary side transitions to ConsistentDisconnected. The master side transitions to IdlingDisconnected. An informational status log is generated every time a relationship or Consistency Group enters the ConsistentStopped state with a status of Online. This log can be configured to enable an SNMP trap and provide a trigger to automation software to consider issuing a start command following a loss of synchronization.

ConsistentSynchronized
This is a connected state. In this state, the master volume is accessible for read and write I/O. The auxiliary volume is accessible for read-only I/O. Writes that are sent to the master volume are sent to both master and auxiliary volumes. Either successful completion must be received for both writes; the write must be failed to the host; or a state must transition out of the ConsistentSynchronized state before a write is completed to the host. A stop command takes the relationship to the ConsistentStopped state. A stop command with the -access parameter takes the relationship to the Idling state. A switch command leaves the relationship in the ConsistentSynchronized state, but reverses the master and auxiliary roles. A start command is accepted, but has no effect. If the relationship or Consistency Group becomes disconnected, the same transitions are made as for ConsistentStopped.

Idling
Idling is a connected state. Both master and auxiliary disks are operating in the master role. Consequently, both master and auxiliary disks are accessible for write I/O. In this state, the relationship or Consistency Group accepts a start command. Global Mirror maintains a record of regions on each disk that received write I/O while Idling. This record is used to determine what areas need to be copied following a start command. The start command must specify the new copy direction. A start command can cause a loss of consistency if either volume in any relationship has received write I/O, which is indicated by the synchronized status. If the start command leads to loss of consistency, you must specify a -force parameter. Following a start command, the relationship or Consistency Group transitions to ConsistentSynchronized if there is no loss of consistency, or to InconsistentCopying if there is a loss of consistency. Also, while in this state, the relationship or Consistency Group accepts a -clean option on the start command. If the relationship or Consistency Group becomes disconnected, both sides change their state to IdlingDisconnected.

IdlingDisconnected
IdlingDisconnected is a disconnected state. The volume or disks in this half of the relationship or Consistency Group are all in the master role and accept read or write I/O. The major priority in this state is to recover the link and reconnect the relationship or Consistency Group.


No configuration activity is possible (except for deletes or stops) until the relationship is reconnected. At that point, the relationship transitions to a connected state. The exact connected state that is entered depends on the state of the other half of the relationship or Consistency Group, which depends on these factors:
The state when it became disconnected
The write activity since it was disconnected
The configuration activity since it was disconnected

If both halves are IdlingDisconnected, the relationship becomes Idling when reconnected. While IdlingDisconnected, if a write I/O is received that causes a loss of synchronization (the synchronized attribute transitions from true to false) and the relationship was not already stopped (either through a user stop or a persistent error), an event is raised. This same event is also raised when this condition occurs for the ConsistentSynchronized state.

InconsistentDisconnected
InconsistentDisconnected is a disconnected state. The volumes in this half of the relationship or Consistency Group are all in the auxiliary role and do not accept read or write I/O. No configuration activity, except for deletes, is permitted until the relationship reconnects. When the relationship or Consistency Group reconnects, it becomes InconsistentCopying automatically unless either of these conditions exists:
- The relationship was InconsistentStopped when it became disconnected.
- The user issued a stop while disconnected.

In either case, the relationship or Consistency Group becomes InconsistentStopped.

ConsistentDisconnected
ConsistentDisconnected is a disconnected state. The volumes in this half of the relationship or Consistency Group are all in the auxiliary role and accept read I/O but not write I/O. This state is entered from ConsistentSynchronized or ConsistentStopped when the auxiliary side of a relationship becomes disconnected.

In this state, the relationship or Consistency Group displays an attribute of FreezeTime, which is the point in time at which consistency was frozen. When entered from ConsistentStopped, it retains the time that it had in that state. When entered from ConsistentSynchronized, the FreezeTime shows the last time at which the relationship or Consistency Group was known to be consistent. This time corresponds to the time of the last successful heartbeat to the other cluster.

A stop command with the -access flag set to true transitions the relationship or Consistency Group to the IdlingDisconnected state. This state allows write I/O to be performed to the auxiliary volumes and is used as part of a disaster recovery (DR) scenario.

When the relationship or Consistency Group reconnects, it becomes ConsistentSynchronized only if doing so does not lead to a loss of consistency. This is the case provided that these conditions are true:
- The relationship was ConsistentSynchronized when it became disconnected.
- No writes received successful completion at the master while disconnected.

Otherwise, the relationship becomes ConsistentStopped. The FreezeTime setting is retained.


Empty
This state only applies to Consistency Groups. It is the state of a Consistency Group that has no relationships and no other state information to show. It is entered when a Consistency Group is first created. It is exited when the first relationship is added to the Consistency Group, at which point the state of the relationship becomes the state of the Consistency Group.

8.9.3 Practical use of Global Mirror


Global Mirror establishes a Global Mirror relationship between two volumes of equal size. The volumes in a Global Mirror relationship are referred to as the master (primary) volume and the auxiliary (secondary) volume. The relationship between the two copies is asymmetric. The master volume is the production volume, and updates to this copy are mirrored to the auxiliary volume. The contents of the auxiliary volume that existed prior to the relationship are lost.

Switching the copy direction: The copy direction for a Global Mirror relationship can be switched so that the auxiliary volume becomes the master and the master volume becomes the auxiliary, much like the restore option for FlashCopy.

While the Global Mirror relationship is active, the auxiliary volume is not accessible for host application write I/O. The SVC allows read-only access to the auxiliary volume when it contains a consistent image. This read-only access is only intended to allow boot time operating system discovery to complete without error, so that any hosts at the secondary site can be ready to start up the applications with minimal delay, if required. For example, many operating systems need to read logical block address (LBA) 0 (zero) to configure a logical unit. Although read access is allowed on the auxiliary, in practice the data on the auxiliary volumes cannot be read by a host, because most operating systems write a dirty bit to the file system when it is mounted. Because this write operation is not allowed on the auxiliary volume, the volume cannot be mounted. This access is only provided where consistency can be guaranteed. However, there is no way in which coherency can be maintained between reads that are performed at the auxiliary and later write I/Os that are performed at the master.

To enable access to the auxiliary volume for host operations, you must stop the Global Mirror relationship by specifying the -access parameter. While access to the auxiliary volume for host operations is enabled, you must instruct the host to mount the volume and perform other related tasks before the application can be started or instructed to perform a recovery process.

Using an auxiliary copy demands a conscious policy decision by the administrator that a failover is required, and the tasks to be performed on the host that is involved in establishing operation on the auxiliary copy are substantial. The goal is to make this failover rapid (much faster than recovering from a backup copy), but it is not seamless. You can automate the failover process by using failover management software. The SVC provides Simple Network Management Protocol (SNMP) traps and programming (or scripting) for the command-line interface (CLI) to enable this automation.

Table 8-7 on page 428 outlines the combinations of FlashCopy and Metro Mirror or Global Mirror functions that are valid for a volume.

8.9.4 Global Mirror configuration limits


Table 8-9 lists the Global Mirror configuration limits.
Table 8-9 Global Mirror configuration limits

Parameter                                                    Value
Number of Metro Mirror Consistency Groups per cluster        256
Number of Metro Mirror relationships per cluster             8192 (based on the maximum number of volumes per cluster)
Number of Metro Mirror relationships per Consistency Group   8192
Total volume size per I/O Group                              1024 TB

A per I/O Group limit of 1024 TB exists on the quantity of master and auxiliary volume address spaces that can participate in Metro Mirror and Global Mirror relationships. This maximum configuration will consume 512 MB of bitmap space for the I/O Group and allow 10 MB of space for all remaining copy services features.

8.10 Global Mirror commands


Here, we summarize several of the most important Global Mirror commands. For complete details about all of the Global Mirror commands, see IBM System Storage SAN Volume Controller: Command-Line Interface User's Guide, GC27-2287.

The command set for Global Mirror contains two broad groups:
- Commands to create, delete, and manipulate relationships and Consistency Groups
- Commands that cause state changes

Where a configuration command affects more than one cluster, Global Mirror performs the work to coordinate configuration activity between the clusters. Certain configuration commands can only be performed when the clusters are connected, and those commands fail with no effect when the clusters are disconnected. Other configuration commands are permitted even though the clusters are disconnected. The state is reconciled automatically by Global Mirror when the clusters are reconnected.

For any given command, with one exception, a single cluster actually receives the command from the administrator. This action is significant for defining the context for a CreateRelationship (mkrcrelationship) command or a CreateConsistencyGroup (mkrcconsistgrp) command, in which case the cluster receiving the command is called the local cluster. The exception is the command that sets clusters into a Global Mirror partnership; the administrator must issue the mkpartnership command to both the local and the remote cluster.

The commands are described here as an abstract command set. You can implement these commands in one of two ways:
- A command-line interface (CLI), which can be used for scripting and automation
- A graphical user interface (GUI), which can be used for one-off tasks


8.10.1 Listing the available SVC cluster partners


To list the SVC clusters that are available for a partnership, use the svcinfo lsclustercandidate command.

svcinfo lsclustercandidate
Use the svcinfo lsclustercandidate command to list the clusters that are available for setting up a two-cluster partnership. This command is a prerequisite for creating Global Mirror relationships. To display the characteristics of the cluster, use the svcinfo lscluster command, specifying the name of the cluster.
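For illustration, a minimal sketch of the sequence (the candidate cluster name ITSO_SVC4 is our assumption, not output from this setup):

svcinfo lsclustercandidate
svcinfo lscluster ITSO_SVC4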

svctask chcluster
The chcluster command has several parameters that are relevant to Global Mirror:

-gmlinktolerance link_tolerance
This parameter specifies the maximum period of time that the system will tolerate delay before stopping Global Mirror relationships. Specify values between 60 and 86400 seconds in increments of 10 seconds. The default value is 300. Do not change this value except under the direction of IBM Support.

-relationshipbandwidthlimit cluster_relationship_bandwidth_limit
This parameter controls the maximum rate at which any one remote copy relationship can synchronize. The default value for the relationship bandwidth limit is 25 MBps, but this value can now be specified between 1 MBps and 1000 MBps. Note that the partnership overall limit is controlled by the chpartnership -bandwidth command and must be set on each involved cluster accordingly.

Attention: Do not set this value higher than the default without first establishing that the higher bandwidth can be sustained without impacting host performance. The limit should never be above the maximum supported by the infrastructure connecting the remote sites, regardless of the compression rates that you might achieve.

-gminterdelaysimulation link_tolerance
This parameter specifies the number of milliseconds that I/O activity (intercluster copying to an auxiliary volume) is delayed. This parameter permits you to test performance implications before deploying Global Mirror and obtaining a long-distance link. Specify a value from 0 to 100 milliseconds in 1 millisecond increments. The default value is 0. Use this argument to test each intercluster Global Mirror relationship separately.

-gmintradelaysimulation link_tolerance
This parameter specifies the number of milliseconds that I/O activity (intracluster copying to an auxiliary volume) is delayed. This parameter permits you to test performance implications before deploying Global Mirror and obtaining a long-distance link. Specify a value from 0 to 100 milliseconds in 1 millisecond increments. The default value is 0. Use this argument to test each intracluster Global Mirror relationship separately.

Use the svctask chcluster command to adjust these values; see the following example:

svctask chcluster -gmlinktolerance 300

You can view all of these parameter values with the svcinfo lscluster <clustername> command.


gmlinktolerance
The gmlinktolerance parameter deserves a particular and detailed note. If poor response extends past the specified tolerance, a 1920 event is logged and one or more Global Mirror relationships are automatically stopped, which protects the application hosts at the primary site. During normal operation, application hosts experience a minimal effect from the response times, because the Global Mirror feature uses asynchronous replication. However, if Global Mirror operations experience degraded response times from the secondary cluster for an extended period of time, I/O operations begin to queue at the primary cluster. This queue results in an extended response time to application hosts. In this situation, the gmlinktolerance feature stops Global Mirror relationships, and the response time for application hosts returns to normal. After a 1920 event has occurred, the Global Mirror auxiliary volumes are no longer in the consistent_synchronized state until you fix the cause of the event and restart your Global Mirror relationships. For this reason, ensure that you monitor the cluster to track when these 1920 events occur.

You can disable the gmlinktolerance feature by setting the gmlinktolerance value to 0 (zero). However, the gmlinktolerance feature cannot protect applications from extended response times if it is disabled. It might be appropriate to disable the gmlinktolerance feature under the following circumstances:
- During SAN maintenance windows, where degraded performance is expected from SAN components and application hosts can withstand extended response times from Global Mirror volumes.
- During periods when application hosts can tolerate extended response times and it is expected that the gmlinktolerance feature might stop the Global Mirror relationships. For example, if you test using an I/O generator that is configured to stress the back-end storage, the gmlinktolerance feature might detect the high latency and stop the Global Mirror relationships. Disabling the gmlinktolerance feature prevents this result at the risk of exposing the test host to extended response times.

We suggest using a script to periodically monitor the Global Mirror status. Example 8-2 shows an example of a ksh script to check the Global Mirror status.
Example 8-2 Script example

[AIX1@root] /usr/GMC > cat checkSVCgm
#!/bin/sh
#
# Description
#
# GM_STATUS        GM status variable
# HOSTsvcNAME      SVC cluster IP address
# PARA_TEST        Consistent synchronized variable
# PARA_TESTSTOPIN  Inconsistent stopped variable
# PARA_TESTSTOP    Consistent stopped variable
# IDCONS           Consistency Group ID variable
# variable definition
HOSTsvcNAME="128.153.3.237"
IDCONS=255
PARA_TEST="consistent_synchronized"
PARA_TESTSTOP="consistent_stopped"
PARA_TESTSTOPIN="inconsistent_stopped"
FLOG="/usr/GMC/log/gmtest.log"
VAR=0

# Start program: loop forever when no argument is given
if [[ $1 == "" ]]
then
  CICLI="true"
fi
while $CICLI
do
  # read the Consistency Group state from the SVC
  GM_STATUS=`ssh -l admin $HOSTsvcNAME svcinfo lsrcconsistgrp -delim : | awk -F: 'NR==2 {print $8 }'`
  echo "`date` Global Mirror STATUS <$GM_STATUS>" >> $FLOG
  if [[ $GM_STATUS = $PARA_TEST ]]
  then
    sleep 600
  else
    sleep 600
    GM_STATUS=`ssh -l admin $HOSTsvcNAME svcinfo lsrcconsistgrp -delim : | awk -F: 'NR==2 {print $8 }'`
    if [[ $GM_STATUS = $PARA_TESTSTOP || $GM_STATUS = $PARA_TESTSTOPIN ]]
    then
      # attempt to restart the Consistency Group
      ssh -l admin $HOSTsvcNAME svctask startrcconsistgrp -force $IDCONS
      TESTEX=`echo $?`
      echo "`date` Global Mirror RESTARTED.......... with RC=$TESTEX" >> $FLOG
    fi
    GM_STATUS=`ssh -l admin $HOSTsvcNAME svcinfo lsrcconsistgrp -delim : | awk -F: 'NR==2 {print $8 }'`
    # the group is no longer stopped, so the restart succeeded
    if [[ $GM_STATUS != $PARA_TESTSTOP ]]
    then
      echo "`date` Global Mirror restarted <$GM_STATUS>"
    else
      echo "`date` ERROR Global Mirror failed <$GM_STATUS>"
    fi
    sleep 600
  fi
  ((VAR+=1))
done

The script in Example 8-2 on page 456 performs these functions:
- Check the Global Mirror status every 600 seconds.
- If the status is ConsistentSynchronized, wait another 600 seconds and test again.
- If the status is ConsistentStopped or InconsistentStopped, wait another 600 seconds and then try to restart Global Mirror.

If the status remains ConsistentStopped or InconsistentStopped, it is likely that an associated 1920 event exists, which means that we might have a performance problem. Waiting 600 seconds before restarting Global Mirror can give the SVC enough time to deliver the high workload that is requested by the server. Because Global Mirror has been stopped for 10 minutes (600 seconds), the auxiliary copy is now out of date by this amount of time and must be resynchronized.

Sample script: The script described in Example 8-2 on page 456 is supplied as is.


A 1920 event indicates that one or more of the SAN components are unable to provide the performance that is required by the application hosts. This situation can be temporary (for example, the result of a maintenance activity) or permanent (for example, the result of a hardware failure or an unexpected host I/O workload). If 1920 events are occurring, it might be necessary to use a performance monitoring and analysis tool, such as the IBM Tivoli Storage Productivity Center, to assist in identifying and resolving the problem.

8.10.2 Creating an SVC cluster partnership


To create an SVC cluster partnership, use the svctask mkpartnership command.

svctask mkpartnership
Use the svctask mkpartnership command to establish a one-way Global Mirror partnership between the local cluster and a remote cluster. To establish a fully functional Global Mirror partnership, you must issue this command on both clusters. This step is a prerequisite for creating Global Mirror relationships between volumes on the SVC clusters. When creating the partnership, you can specify the bandwidth to be used by the background copy process between the local and the remote SVC cluster, and if it is not specified, the bandwidth defaults to 50 MBps. The bandwidth must be set to a value that is less than or equal to the bandwidth that can be sustained by the intercluster link.
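For illustration, a minimal sketch, assuming a local cluster named ITSO_SVC1 and a remote cluster named ITSO_SVC4 (both names are our assumptions):

On ITSO_SVC1: svctask mkpartnership -bandwidth 50 ITSO_SVC4
On ITSO_SVC4: svctask mkpartnership -bandwidth 50 ITSO_SVC1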

Background copy bandwidth effect on foreground I/O latency


The background copy bandwidth determines the rate at which the background copy will be attempted for Global Mirror. The background copy bandwidth can affect foreground I/O latency in one of three ways:
- If the background copy bandwidth is set too high compared to the Global Mirror intercluster link capacity, the background copy I/Os can back up on the Global Mirror intercluster link. There is a delay in the synchronous auxiliary writes of foreground I/Os, and the foreground I/O latency increases as perceived by applications.
- If the background copy bandwidth is set too high for the storage at the primary site, background copy read I/Os overload the primary storage and delay foreground I/Os.
- If the background copy bandwidth is set too high for the storage at the secondary site, background copy writes at the secondary site overload the secondary storage and again delay the synchronous secondary writes of foreground I/Os.

To set the background copy bandwidth optimally, make sure that you consider all three resources: the primary storage, the intercluster link bandwidth, and the secondary storage. Provision the most restrictive of these three resources between the background copy bandwidth and the peak foreground I/O workload. Perform this provisioning by calculation or, alternatively, by determining experimentally how much background copy can be allowed before the foreground I/O latency becomes unacceptable, and then reduce the background copy to accommodate peaks in workload and an additional safety margin.

svctask chpartnership
To change the bandwidth that is available for background copy in an SVC cluster partnership, use the svctask chpartnership command to specify the new bandwidth.
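For illustration, a minimal sketch that lowers the background copy bandwidth to 40 MBps for the partnership with a remote cluster named ITSO_SVC4 (the name is our assumption):

svctask chpartnership -bandwidth 40 ITSO_SVC4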


8.10.3 Creating a Global Mirror Consistency Group


To create a Global Mirror Consistency Group, use the svctask mkrcconsistgrp command.

svctask mkrcconsistgrp
Use the svctask mkrcconsistgrp command to create a new, empty Global Mirror Consistency Group. The Global Mirror Consistency Group name must be unique across all Consistency Groups that are known to the clusters owning this Consistency Group. If the Consistency Group involves two clusters, the clusters must be in communication throughout the creation process. The new Consistency Group does not contain any relationships and will be in the Empty state. You can add Global Mirror relationships to the group, either upon creation or afterward, by using the svctask chrcrelationship command.
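For illustration, a minimal sketch that creates a Consistency Group spanning the local cluster and a remote cluster named ITSO_SVC4 (the group and cluster names are our assumptions):

svctask mkrcconsistgrp -name CG_GM -cluster ITSO_SVC4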

8.10.4 Creating a Global Mirror relationship


To create a Global Mirror relationship, use the svctask mkrcrelationship command.

Optional parameter: If you do not use the -global optional parameter, a Metro Mirror relationship is created instead of a Global Mirror relationship.

svctask mkrcrelationship
Use the svctask mkrcrelationship command to create a new Global Mirror relationship. This relationship persists until it is deleted. The auxiliary volume must be equal in size to the master volume or the command will fail, and if both volumes are in the same cluster, they must both be in the same I/O Group. The master and auxiliary volume cannot be in an existing relationship, and they cannot be the target of a FlashCopy mapping. This command returns the new relationship (relationship_id) when successful. When creating the Global Mirror relationship, you can add it to a Consistency Group that already exists, or it can be a stand-alone Global Mirror relationship if no Consistency Group is specified. To check whether the master or auxiliary volumes comply with the prerequisites to participate in a Global Mirror relationship, use the svcinfo lsrcrelationshipcandidate command, as shown in svcinfo lsrcrelationshipcandidate on page 459.
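For illustration, a minimal sketch that creates a Global Mirror relationship between a local volume and a volume on a remote cluster named ITSO_SVC4, adding it to an existing Consistency Group (all object names are our assumptions):

svctask mkrcrelationship -master GM_Master_Vol -aux GM_Aux_Vol -cluster ITSO_SVC4 -consistgrp CG_GM -global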

svcinfo lsrcrelationshipcandidate
Use the svcinfo lsrcrelationshipcandidate command to list the available volumes that are eligible to form a Global Mirror relationship. When issuing the command, you can specify the master volume name and auxiliary cluster to list candidates that comply with the prerequisites to create a Global Mirror relationship. If the command is issued with no parameters, all volumes that are not disallowed by another configuration state, such as being a FlashCopy target, are listed.

8.10.5 Changing a Global Mirror relationship


To modify the properties of a Global Mirror relationship, use the svctask chrcrelationship command.


svctask chrcrelationship
Use the svctask chrcrelationship command to modify the following properties of a Global Mirror relationship:
- Change the name of a Global Mirror relationship.
- Add a relationship to a group.
- Remove a relationship from a group using the -force flag.

Adding a Global Mirror relationship: When adding a Global Mirror relationship to a Consistency Group that is not empty, the relationship must have the same state and copy direction as the group to which it is added.
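For illustration, a minimal sketch that renames a relationship and then adds it to a Consistency Group (the relationship and group names are our assumptions):

svctask chrcrelationship -name GM_Rel1_new GM_Rel1
svctask chrcrelationship -consistgrp CG_GM GM_Rel1_new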

8.10.6 Changing a Global Mirror Consistency Group


To change the name of a Global Mirror Consistency Group, use the following command.

svctask chrcconsistgrp
Use the svctask chrcconsistgrp command to change the name of a Global Mirror Consistency Group.
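For illustration, a minimal sketch that renames a Consistency Group (both names are our assumptions):

svctask chrcconsistgrp -name CG_GM_new CG_GM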

8.10.7 Starting a Global Mirror relationship


To start a stand-alone Global Mirror relationship, use the following command.

svctask startrcrelationship
Use the svctask startrcrelationship command to start the copy process of a Global Mirror relationship. When issuing the command, you can set the copy direction if it is undefined, and, optionally, you can mark the auxiliary volume of the relationship as clean. The command fails if it is used to attempt to start a relationship that is already part of a Consistency Group. You can only issue this command to a relationship that is connected. For a relationship that is idling, this command assigns a copy direction (master and auxiliary roles) and begins the copy process. Otherwise, this command restarts a previous copy process that was stopped either by a stop command or by an I/O error.

If the resumption of the copy process leads to a period when the relationship is inconsistent, you must specify the -force parameter when restarting the relationship. This situation can arise if, for example, the relationship was stopped and then further writes were performed on the original master of the relationship. The use of the -force parameter here is a reminder that the data on the auxiliary will become inconsistent while resynchronization (background copying) takes place and, therefore, is unusable for DR purposes before the background copy has completed. In the Idling state, you must specify the master volume to indicate the copy direction. In other connected states, you can provide the -primary argument, but it must match the existing setting.
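For illustration, a minimal sketch that starts an idling relationship with the master as the copy source (the relationship name is our assumption):

svctask startrcrelationship -primary master GM_Rel1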

8.10.8 Stopping a Global Mirror relationship


To stop a stand-alone Global Mirror relationship, use the svctask stoprcrelationship command.


svctask stoprcrelationship
Use the svctask stoprcrelationship command to stop the copy process for a relationship. You can also use this command to enable write access to a consistent auxiliary volume by specifying the -access parameter. This command applies to a stand-alone relationship. It is rejected if it is addressed to a relationship that is part of a Consistency Group. You can issue this command to stop a relationship that is copying from master to auxiliary. If the relationship is in an inconsistent state, any copy operation stops and does not resume until you issue an svctask startrcrelationship command. Write activity is no longer copied from the master to the auxiliary volume. For a relationship in the ConsistentSynchronized state, this command causes a Consistency Freeze. When a relationship is in a consistent state (that is, in the ConsistentStopped, ConsistentSynchronized, or ConsistentDisconnected state), you can use the -access parameter with the svctask stoprcrelationship command to enable write access to the auxiliary volume.
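For illustration, a minimal sketch that stops a relationship and enables write access to the auxiliary volume (the relationship name is our assumption):

svctask stoprcrelationship -access GM_Rel1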

8.10.9 Starting a Global Mirror Consistency Group


To start a Global Mirror Consistency Group, use the svctask startrcconsistgrp command.

svctask startrcconsistgrp
Use the svctask startrcconsistgrp command to start a Global Mirror Consistency Group. You can only issue this command to a Consistency Group that is connected. For a Consistency Group that is idling, this command assigns a copy direction (master and auxiliary roles) and begins the copy process. Otherwise, this command restarts a previous copy process that was stopped either by a stop command or by an I/O error.
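For illustration, a minimal sketch that starts an idling Consistency Group with the master volumes as the copy source (the group name is our assumption):

svctask startrcconsistgrp -primary master CG_GM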

8.10.10 Stopping a Global Mirror Consistency Group


To stop a Global Mirror Consistency Group, use the svctask stoprcconsistgrp command.

svctask stoprcconsistgrp
Use the svctask stoprcconsistgrp command to stop the copy process for a Global Mirror Consistency Group. You can also use this command to enable write access to the auxiliary volumes in the group if the group is in a consistent state. If the Consistency Group is in an inconsistent state, any copy operation stops and does not resume until you issue the svctask startrcconsistgrp command. Write activity is no longer copied from the master to the auxiliary volumes that belong to the relationships in the group. For a Consistency Group in the ConsistentSynchronized state, this command causes a Consistency Freeze. When a Consistency Group is in a consistent state (for example, in the ConsistentStopped, ConsistentSynchronized, or ConsistentDisconnected state), you can use the -access parameter with the svctask stoprcconsistgrp command to enable write access to the auxiliary volumes within that group.
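For illustration, a minimal sketch that stops a Consistency Group and enables write access to its auxiliary volumes (the group name is our assumption):

svctask stoprcconsistgrp -access CG_GM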

8.10.11 Deleting a Global Mirror relationship


To delete a Global Mirror relationship, use the svctask rmrcrelationship command.


svctask rmrcrelationship
Use the svctask rmrcrelationship command to delete the relationship that is specified. Deleting a relationship only deletes the logical relationship between the two volumes. It does not affect the volumes themselves. If the relationship is disconnected at the time that the command is issued, the relationship is only deleted on the cluster on which the command is being run. When the clusters reconnect, the relationship is automatically deleted on the other cluster. Alternatively, if the clusters are disconnected, and you still want to remove the relationship on both clusters, you can issue the rmrcrelationship command independently on both of the clusters. A relationship cannot be deleted if it is part of a Consistency Group. You must first remove the relationship from the Consistency Group. If you delete an inconsistent relationship, the auxiliary volume becomes accessible even though it is still inconsistent. This situation is the one case in which Global Mirror does not inhibit access to inconsistent data.
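For illustration, a minimal sketch (the relationship name is our assumption):

svctask rmrcrelationship GM_Rel1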

8.10.12 Deleting a Global Mirror Consistency Group


To delete a Global Mirror Consistency Group, use the svctask rmrcconsistgrp command.

svctask rmrcconsistgrp
Use the svctask rmrcconsistgrp command to delete a Global Mirror Consistency Group. This command deletes the specified Consistency Group. You can issue this command for any existing Consistency Group. If the Consistency Group is disconnected at the time that the command is issued, the Consistency Group is only deleted on the cluster on which the command is being run. When the clusters reconnect, the Consistency Group is automatically deleted on the other cluster. Alternatively, if the clusters are disconnected, and you still want to remove the Consistency Group on both clusters, you can issue the svctask rmrcconsistgrp command separately on both of the clusters. If the Consistency Group is not empty, the relationships within it are removed from the Consistency Group before the group is deleted. These relationships then become stand-alone relationships. The state of these relationships is not changed by the action of removing them from the Consistency Group.
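For illustration, a minimal sketch (the group name is our assumption):

svctask rmrcconsistgrp CG_GM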

8.10.13 Reversing a Global Mirror relationship


To reverse a Global Mirror relationship, use the svctask switchrcrelationship command.

svctask switchrcrelationship
Use the svctask switchrcrelationship command to reverse the roles of the master volume and the auxiliary volume when a stand-alone relationship is in a consistent state; when issuing the command, the desired master needs to be specified.
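For illustration, a minimal sketch that makes the auxiliary volume the new master (the relationship name is our assumption):

svctask switchrcrelationship -primary aux GM_Rel1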

8.10.14 Reversing a Global Mirror Consistency Group


To reverse a Global Mirror Consistency Group, use the svctask switchrcconsistgrp command.

svctask switchrcconsistgrp
Use the svctask switchrcconsistgrp command to reverse the roles of the master volume and the auxiliary volume when a Consistency Group is in a consistent state. This change is applied to all of the relationships in the Consistency Group, and when issuing the command, the desired master needs to be specified.
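For illustration, a minimal sketch that makes the auxiliary volumes the new masters for every relationship in the group (the group name is our assumption):

svctask switchrcconsistgrp -primary aux CG_GM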

8.11 Troubleshooting Remote Copy


Remote Copy (Global Mirror and Metro Mirror) has two primary error codes that can be displayed: 1920 and 1720. A 1920 is a congestion error, which means that the source, the link between source and target, or the target was not able to keep up with the rate of demand. A 1720, on the other hand, is a heartbeat or system partnership communication error. This error tends to be more serious, because failing communication between your system partners involves some extended diagnostic time.

8.11.1 1920 error


Let's focus first on the 1920 error. A 1920 error (event ID 050010) can have several triggers. The official probable cause projections are:
- Primary 2145 system or SAN fabric problem (10%)
- Primary 2145 system or SAN fabric configuration (10%)
- Secondary 2145 system or SAN fabric problem (15%)
- Secondary 2145 system or SAN fabric configuration (25%)
- Intercluster link problem (15%)
- Intercluster link configuration (25%)

In practice, the trigger most often overlooked is latency. Global Mirror has a round-trip-time tolerance limit of 80 milliseconds (for V4.1.1.x and later). That is, a message sent from your source SVC cluster to your target SVC cluster and the accompanying acknowledgement must have a total time of 80 milliseconds, or 40 milliseconds each way.

Note: For V4.1.0.x and earlier, this limit was 68 milliseconds, or 34 milliseconds one way, for Fibre Channel extenders; for SAN routers, it was 20 milliseconds round trip, or 10 milliseconds one way. Make sure to use the correct values for the correct versions.

The primary component of your round-trip time is the physical distance between sites. For every 1000 kilometers (621.36 miles), you observe a 5 millisecond delay in each direction. This delay does not include the time added by equipment in the path. Every device adds a varying amount of time, depending on the device, but a good rule of thumb is 25 microseconds for pure hardware devices. For software-based functions (such as compression implemented in software), the delay added tends to be much higher (usually in the millisecond-plus range).

Now that we have covered the physics of the matter, consider an example of how it translates into physical delay. Company A has a production site that is 1900 kilometers from its recovery site. The network service provider uses a total of five devices to connect the two sites. In addition to those devices, Company A employs a SAN Fibre Channel router at each site to provide FCIP to encapsulate the Fibre Channel traffic between sites. That is now seven devices and 1900 kilometers of distance delay. Combined, the devices add about 200 microseconds of delay each way. The distance adds 9.5 milliseconds each way, for a total of 19 milliseconds. Combined with the device latency, that is 19.4 milliseconds of physical latency, at a minimum. That is well under the 80 millisecond limit of Global Mirror, until you realize that this is the best-case number.

The link quality and bandwidth play a big role here. Your network provider will likely guarantee a latency maximum on your network link; be sure to stay as far below the Global Mirror RTT

limit as possible. You can easily double or triple the expected physical latency with a lower-quality or lower-bandwidth network link, and suddenly you are within range of exceeding the limit the moment a large flood of I/O happens that exceeds the bandwidth capacity that you have in place.

When you get a 1920 error, always check the latency first. Keep in mind that the FCIP routing layer can introduce latency if it is not properly configured. If your network provider reports a much lower latency, this difference can be an indication of a problem at your FCIP routing layer. Most FCIP routing devices have built-in tools that allow you to check the RTT. When checking latency, remember that TCP/IP routing devices (including FCIP routers) report round-trip time (RTT) using standard 64-byte ping packets. In Figure 8-41 you can see why the effective transit time should only be measured using packets large enough to hold a Fibre Channel frame. This size is 2148 bytes (2112 bytes of payload and 36 bytes of header), and you should allow some overhead to be safe, because different switching vendors have optional features that can increase this size. After you have verified your latency using the proper packet size, proceed with normal hardware troubleshooting.

Before we proceed, let's take a quick look at the second largest component of your round-trip time: serialization delay. Serialization delay is simply the amount of time required to move a packet of data of a specific size across a network link of a given bandwidth. It is based on a simple concept: the time required to move a specific amount of data decreases as the data transmission rate increases. Look again at Figure 8-41 and notice the orders of magnitude of difference between the link bandwidths. It is easy to see how 1920 errors can arise when your bandwidth is insufficient, and why you should never use a TCP/IP ping to measure round-trip time for FCIP traffic.

Figure 8-41 The effect of packet size (in bytes) versus the link size

Figure 8-41 compares the amount of time in microseconds required to transmit a packet across network links of varying bandwidth capacity. Three packet sizes are used:
1. 64 bytes: The size of the common ping packet


2. 1500 bytes: The size of the standard TCP/IP packet
3. 2148 bytes: The size of a Fibre Channel frame

Finally, remember that your path maximum transmission unit (MTU) affects the delay incurred in getting a packet from one location to another, when it causes fragmentation, or when it is too large and causes too many retransmits when a packet is lost.
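As a rough worked example of serialization delay (our own arithmetic, not a figure from this draft): a 2148-byte Fibre Channel frame is 17184 bits, so on a 155 Mbps (OC-3) link it takes about 17184 / 155000000, or roughly 111 microseconds, to serialize, while on a 1 Gbps link it takes about 17 microseconds. A 64-byte ping packet serializes in a small fraction of that time on either link, which is one more reason why ping results understate the transit time of real replication traffic.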

8.11.2 1720 error


The 1720 error (event ID 050020) is the other nemesis of successful Remote Copy. Because the term system partnership implies that all involved virtualization systems are partners, they must communicate. The amount of bandwidth needed for system-to-system communications varies based on the number of nodes, but the important fact is that it is not zero. When a partner on either side stops communicating, you see a 1720 error appear in your error log. According to the official documentation, there are no likely field-replaceable unit breakages or other causes. In practice, the source of this error is most often a fabric problem or a problem in the network path between your partners.

When you receive this error, if your fabric has more than 64 host bus adapter (HBA) ports zoned, check your fabric configuration for zoning of more than one HBA port for each node per I/O Group. One port for each node per I/O Group associated with the host is the recommended zoning configuration for fabrics. For fabrics with 64 or more host ports, this recommendation becomes a rule: you must follow this zoning rule or the configuration is technically unsupported. Improper zoning leads to SAN congestion, which can inhibit remote link communication intermittently. Checking the zero buffer credit timer via IBM Tivoli Storage Productivity Center and comparing it against your sample interval can reveal potential SAN congestion. Any time the zero buffer credit timer is above two percent of the total time of the sample interval, it is likely to cause problems.

Next, always ask your network provider to check the status of the link. If the link is okay, watch for repeats of this error. It is possible in a normal and functional network setup to see occasional 1720 errors, but multiple occurrences point to a larger problem. If you receive multiple 1720 errors, recheck your network connection, check the system partnership information to verify its status and settings, and then perform diagnostics for every piece of equipment in the path between the two clusters. It often helps to have a diagram showing the path of your replication from both logical and physical configuration viewpoints.

If your investigations fail to resolve your Remote Copy problems, contact your IBM Support representative for a more complete analysis.


Chapter 9. SAN Volume Controller operations using the command-line interface


In this chapter, we describe operational management and use the command-line interface (CLI) to demonstrate both normal and advanced operations. You can use either the CLI or the GUI to manage IBM System Storage SAN Volume Controller (SVC) operations; we use the CLI in this chapter because these operations can be scripted, and it is easier to document the scripts using the CLI. This chapter assumes a fully functional SVC environment.


9.1 Normal operations using CLI


In the following topics, we describe the commands that best represent normal operations.

9.1.1 Command syntax and online help


Command prefix changes: The svctask and svcinfo command prefixes are no longer needed when issuing a command. If you have existing scripts that use those prefixes, they continue to function; you do not need to change your scripts.

Two major command sets are available:
- The svcinfo command set allows you to query the various components within the SVC environment.
- The svctask command set allows you to make changes to the various components within the SVC.

When the command syntax is shown, you see certain parameters in square brackets, for example [parameter]. The brackets indicate that the parameter is optional in most, if not all, instances. Any information that is not in square brackets is required. You can view the syntax of a command by entering one of the following commands:

svcinfo -?                          Shows a complete list of information commands.
svctask -?                          Shows a complete list of task commands.
svcinfo commandname -?              Shows the syntax of information commands.
svctask commandname -?              Shows the syntax of task commands.
svcinfo commandname -filtervalue?   Shows the filters that you can use to reduce the output of the information commands.

Help: You can also use -h instead of -?, for example, the svcinfo -h or svctask commandname -h command.

If you look at the syntax of a command by typing svcinfo commandname -?, you often see -filter listed as a parameter. Be aware that the correct parameter is -filtervalue.

Tip: You can use the up and down arrow keys on your keyboard to recall commands that were recently issued. Then, you can use the left and right, Backspace, and Delete keys to edit commands before you resubmit them.

Using shortcuts
You can use the shortcuts command to display a list of display or execution commands. This command produces an alphabetical list of the actions that are supported. The command parameter must be svcinfo for display commands or svctask for execution commands. The model parameter allows for different shortcuts on different platforms: 2145 or 2076. The syntax is:

<command> shortcuts <model>

See Example 9-1 on page 469 (some lines have been removed from the command output for brevity).


Example 9-1 shortcuts command

IBM_2145:ITSO_SVC1:admin>svctask shortcuts 2145
addcontrolenclosure addhostiogrp addhostport addmdisk addnode addvdiskcopy
applydrivesoftware applysoftware cancellivedump cfgportip chhost chiogrp
chldap chldapserver chlicense chmdisk chmdiskgrp chnode chnodehw
chpartnership chquorum chrcconsistgrp mkemailserver mkemailuser
mkfcconsistgrp mkfcmap mkhost mkldapserver mkmdiskgrp mkpartnership
mkrcconsistgrp mkrcrelationship mksnmpserver mksyslogserver mkuser
mkusergrp mkvdisk mkvdiskhostmap rmmdisk rmmdiskgrp rmnode rmpartnership
rmportip rmrcconsistgrp triggerlivedump writesernum

Using reverse-i-search
If you work on your SVC with the same PuTTY session for many hours and enter many commands, scrolling back to find your previous or similar commands can be a time-intensive task. In this case, the reverse-i-search facility can help you quickly and easily find any command that you already issued in the history of your commands by using

the Ctrl+r keys. Ctrl+r allows you to interactively search through the command history as you type commands. Pressing Ctrl+r at an empty command prompt gives you a prompt, as shown in Example 9-2.
Example 9-2 Using reverse-i-search

IBM_2145:ITSO_SVC1:admin>lsiogrp
id name            node_count vdisk_count host_count
0  io_grp0         2          10          8
1  io_grp1         2          10          8
2  io_grp2         0          0           0
3  io_grp3         0          0           0
4  recovery_io_grp 0          0           0
(reverse-i-search)`i': lsiogrp

As shown in Example 9-2, we had previously executed an lsiogrp command. By then pressing Ctrl+r and typing i, the command we needed was recalled from history.

9.2 Working with managed disks and disk controller systems


This section details the various configuration and administration tasks that you can perform on the managed disks (MDisks) within the SVC environment and the tasks that you can perform at a disk controller level.

9.2.1 Viewing disk controller details


Use the lscontroller command to display summary information about all available back-end storage systems. To display more detailed information about a specific controller, run the command again, appending the controller name or ID (for example, controller ID 2), as shown in Example 9-3.
Example 9-3 lscontroller command

IBM_2145:ITSO_SVC1:admin>lscontroller 2
id 2
controller_name DS3500
WWNN 20080080E51B09E8
mdisk_link_count 10
max_mdisk_link_count 10
degraded no
vendor_id LSI
product_id_low INF-01-0
product_id_high 0
product_revision 0770
ctrl_s/n b Ns M
allow_quorum yes
WWPN 20680080E51B09E8
path_count 12
max_path_count 24
WWPN 20690080E51B09E8
path_count 8
max_path_count 20
WWPN 20580080E51B09E8
path_count 12
max_path_count 12
WWPN 20590080E51B09E8


path_count 8
max_path_count 20
IBM_2145:ITSO_SVC1:admin>

9.2.2 Renaming a controller


Use the chcontroller command to change the name of a storage controller. To verify the change, run the lscontroller command. Example 9-4 shows both of these commands.
Example 9-4 chcontroller command

IBM_2145:ITSO_SVC1:admin>chcontroller -name ITSO-DS3500 DS3500
IBM_2145:ITSO_SVC1:admin>lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0  ITSO-DS5000              LSI       INF-01-0       0
2  ITSO-DS3500     b Ns M   LSI       INF-01-0       0
IBM_2145:ITSO_SVC1:admin>

This command renames the controller named DS3500 to ITSO-DS3500.

Choosing a new name: The chcontroller command specifies the new name first. You can use letters A to Z, a to z, numbers 0 to 9, the dash (-), and the underscore (_). The new name can be between one and 63 characters in length. However, the new name cannot start with a number, dash, or the word controller (because this prefix is reserved for SVC assignment only).

9.2.3 Discovery status


Use the lsdiscoverystatus command, as shown in Example 9-5, to determine if a discovery operation is in progress. The output of this command is a status of active or inactive.
Example 9-5 lsdiscoverystatus command

IBM_2145:ITSO_SVC1:admin>lsdiscoverystatus
id scope     IO_group_id IO_group_name status
0  fc_fabric                           inactive

This command displays the state of all discoveries in the clustered system. During discovery, the system updates the drive and MDisk records. You must wait until the discovery has finished and is inactive before you attempt to use the system. This command displays one of the following results:
- active: A discovery operation is in progress at the time that the command is issued.
- inactive: No discovery operations are in progress at the time that the command is issued.

9.2.4 Discovering MDisks


In general, the clustered system detects the MDisks automatically when they appear in the network. However, certain Fibre Channel (FC) controllers do not send the required Small


Computer System Interface (SCSI) primitives that are necessary to automatically discover the new MDisks. If new storage has been attached and the clustered system has not detected it, it might be necessary to run this command before the system can detect the new MDisks. Use the detectmdisk command to scan for newly added MDisks (Example 9-6).
Example 9-6 detectmdisk

IBM_2145:ITSO_SVC1:admin>detectmdisk

To check whether any newly added MDisks were successfully detected, run the lsmdisk command and look for new unmanaged MDisks. If the disks do not appear, check that the disks are appropriately assigned to the SVC in the disk subsystem, and that the zones are set up properly.

Note: If you have assigned a large number of logical unit numbers (LUNs) to your SVC, the discovery process can take time. Check several times, by using the lsmdisk command, whether all of the MDisks that you were expecting are present.

When all of the disks allocated to the SVC are seen from the SVC system, the following procedure is a useful way to verify which MDisks are unmanaged and ready to be added to a storage pool. Perform the following steps to display MDisks:
1. Enter the lsmdiskcandidate command, as shown in Example 9-7. This command displays all detected MDisks that are not currently part of a storage pool.
Example 9-7 lsmdiskcandidate command

IBM_2145:ITSO_SVC1:admin>lsmdiskcandidate
id
0
1
2
.
.

Alternatively, you can list all MDisks (managed or unmanaged) by issuing the lsmdisk command, as shown in Example 9-8.
Example 9-8 lsmdisk command IBM_2145:ITSO_SVC1:admin>lsmdisk -filtervalue controller_name=ITSO-DS3500 id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID tier 0 mdisk0 online managed 0 STGPool_DS3500-1 128.0GB 0000000000000000 ITSO-DS3500 60080e50001b0b62000007b04e731e4d00000000000000000000000000000000 generic_hdd 1 mdisk1 online managed 0 STGPool_DS3500-1 128.0GB 0000000000000001 ITSO-DS3500 60080e50001b0b62000007b24e731e6000000000000000000000000000000000 generic_hdd 2 mdisk2 online managed 0 STGPool_DS3500-1 128.0GB 0000000000000002 ITSO-DS3500 60080e50001b09e8000006f44e731bdc00000000000000000000000000000000 generic_hdd 3 mdisk3 online managed 1 STGPool_DS3500-2 128.0GB 0000000000000003 ITSO-DS3500 60080e50001b0b62000007b44e731e8400000000000000000000000000000000 generic_hdd 4 mdisk4 online managed 1 STGPool_DS3500-2 128.0GB 0000000000000004 ITSO-DS3500 60080e50001b09e8000006f64e731bff00000000000000000000000000000000 generic_hdd


5 mdisk5 online managed 1 STGPool_DS3500-2 128.0GB 0000000000000005 ITSO-DS3500 60080e50001b0b62000007b64e731ea900000000000000000000000000000000 generic_hdd 6 mdisk6 online unmanaged 10.0GB 0000000000000006 ITSO-DS3500 60080e50001b09e80000085f4e7d60dd00000000000000000000000000000000 generic_hdd

From this output, you can see additional information about each MDisk (such as the current status). For the purpose of our current task, we are only interested in the unmanaged disks, because they are candidates for a storage pool.

Tip: The -delim parameter collapses output instead of wrapping text over multiple lines.

2. If not all of the MDisks that you expected are visible, rescan the available FC network by entering the detectmdisk command, as shown in Example 9-9.
Example 9-9 detectmdisk

IBM_2145:ITSO_SVC1:admin>detectmdisk

3. If you run the lsmdiskcandidate command again and your MDisk or MDisks are still not visible, check that the LUNs from your subsystem have been properly assigned to the SVC and that appropriate zoning is in place (for example, the SVC can see the disk subsystem). See Chapter 3, Planning and configuration on page 67 for details about setting up your storage area network (SAN) fabric.

9.2.5 Viewing MDisk information


When viewing information about the MDisks (managed or unmanaged), you can use the lsmdisk command to display overall summary information about all available managed disks. To display more detailed information about a specific MDisk, run the command again, appending the MDisk name or ID (for example, mdisk0). The overview command is lsmdisk -delim, as shown in Example 9-10. The detailed view for an individual MDisk is lsmdisk followed by the name or ID of the MDisk from which you want the information, as shown in Example 9-11 on page 474.
Example 9-10 lsmdisk command IBM_2145:ITSO_SVC1:admin>lsmdisk -delim : id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID:tie r 0:mdisk0:online:managed:0:STGPool_DS3500-1:128.0GB:0000000000000000:ITSO-DS3500:60080e50001 b0b62000007b04e731e4d00000000000000000000000000000000:generic_hdd 1:mdisk1:online:managed:0:STGPool_DS3500-1:128.0GB:0000000000000001:ITSO-DS3500:60080e50001 b0b62000007b24e731e6000000000000000000000000000000000:generic_hdd 2:mdisk2:online:managed:0:STGPool_DS3500-1:128.0GB:0000000000000002:ITSO-DS3500:60080e50001 b09e8000006f44e731bdc00000000000000000000000000000000:generic_hdd 3:mdisk3:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000003:ITSO-DS3500:60080e50001 b0b62000007b44e731e8400000000000000000000000000000000:generic_hdd 4:mdisk4:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000004:ITSO-DS3500:60080e50001 b09e8000006f64e731bff00000000000000000000000000000000:generic_hdd 5:mdisk5:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000005:ITSO-DS3500:60080e50001 b0b62000007b64e731ea900000000000000000000000000000000:generic_hdd 6:mdisk6:online:unmanaged:::10.0GB:0000000000000006:ITSO-DS3500:60080e50001b09e80000085f4e7 d60dd00000000000000000000000000000000:generic_hdd

Example 9-11 on page 474 shows a summary for a single MDisk.


Example 9-11 Usage of the command lsmdisk (ID)

IBM_2145:ITSO_SVC1:admin>lsmdisk 0
id 0
name mdisk0
status online
mode managed
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
capacity 128.0GB
quorum_index 1
block_size 512
controller_name ITSO-DS3500
ctrl_type 4
ctrl_WWNN 20080080E51B09E8
controller_id 2
path_count 4
max_path_count 4
ctrl_LUN_# 0000000000000000
UID 60080e50001b0b62000007b04e731e4d00000000000000000000000000000000
preferred_WWPN 20580080E51B09E8
active_WWPN 20580080E51B09E8
fast_write_state empty
raid_status
raid_level
redundancy
strip_size
spare_goal
spare_protection_min
balanced
tier generic_hdd

9.2.6 Renaming an MDisk


Use the chmdisk command to change the name of an MDisk. When using the command, be aware that the new name comes first, followed by the ID or name of the MDisk being renamed. Use this format: chmdisk -name (new name) (current ID/name). Use the lsmdisk command to verify the change. Example 9-12 shows the chmdisk command.
Example 9-12 chmdisk command

IBM_2145:ITSO_SVC1:admin>chmdisk -name mdisk_0 mdisk0

This command renamed the MDisk named mdisk0 to mdisk_0. The chmdisk command: The chmdisk command specifies the new name first. You can use letters A to Z, a to z, numbers 0 to 9, the dash (-), and the underscore (_). The new name can be between one and 63 characters in length. However, the new name cannot start with a number, dash, or the word mdisk (because this prefix is reserved for SVC assignment only).

9.2.7 Including an MDisk


If a significant number of errors occur on an MDisk, the SVC automatically excludes it. These errors can result from a hardware problem, a SAN problem, or poorly planned maintenance. If it is a hardware fault, you might receive a Simple Network Management Protocol (SNMP) alert about the state of the disk subsystem (before the disk was excluded),

and you can undertake preventive maintenance. If not, the hosts that were using the virtual disks (VDisks) that used the excluded MDisk now have I/O errors. By running the lsmdisk command, you can see that mdisk0 is excluded, as shown in Example 9-13.
Example 9-13 lsmdisk command: Excluded MDisk IBM_2145:ITSO_SVC1:admin>lsmdisk -delim : id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID:tie r 0:mdisk0:excluded:managed:0:STGPool_DS3500-1:128.0GB:0000000000000000:ITSO-DS3500:60080e500 01b0b62000007b04e731e4d00000000000000000000000000000000:generic_hdd 1:mdisk1:online:managed:0:STGPool_DS3500-1:128.0GB:0000000000000001:ITSO-DS3500:60080e50001 b0b62000007b24e731e6000000000000000000000000000000000:generic_hdd 2:mdisk2:online:managed:0:STGPool_DS3500-1:128.0GB:0000000000000002:ITSO-DS3500:60080e50001 b09e8000006f44e731bdc00000000000000000000000000000000:generic_hdd 3:mdisk3:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000003:ITSO-DS3500:60080e50001 b0b62000007b44e731e8400000000000000000000000000000000:generic_hdd 4:mdisk4:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000004:ITSO-DS3500:60080e50001 b09e8000006f64e731bff00000000000000000000000000000000:generic_hdd 5:mdisk5:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000005:ITSO-DS3500:60080e50001 b0b62000007b64e731ea900000000000000000000000000000000:generic_hdd 6:mdisk6:online:unmanaged:::10.0GB:0000000000000006:ITSO-DS3500:60080e50001b09e80000085f4e7 d60dd00000000000000000000000000000000:generic_hdd

After taking the necessary corrective action to repair the MDisk (for example, replace the failed disk, repair the SAN zones, and so on), we need to include the MDisk again by issuing the includemdisk command (Example 9-14), because the SVC system does not include the MDisk automatically.
Example 9-14 includemdisk
IBM_2145:ITSO_SVC1:admin>includemdisk mdisk0

Running the lsmdisk command again shows mdisk0 online again; see Example 9-15.
Example 9-15 lsmdisk command: Verifying that MDisk is included
IBM_2145:ITSO_SVC1:admin>lsmdisk -delim :
id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID:tier
0:mdisk0:online:managed:0:STGPool_DS3500-1:128.0GB:0000000000000000:ITSO-DS3500:60080e50001b0b62000007b04e731e4d00000000000000000000000000000000:generic_hdd
1:mdisk1:online:managed:0:STGPool_DS3500-1:128.0GB:0000000000000001:ITSO-DS3500:60080e50001b0b62000007b24e731e6000000000000000000000000000000000:generic_hdd
2:mdisk2:online:managed:0:STGPool_DS3500-1:128.0GB:0000000000000002:ITSO-DS3500:60080e50001b09e8000006f44e731bdc00000000000000000000000000000000:generic_hdd
3:mdisk3:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000003:ITSO-DS3500:60080e50001b0b62000007b44e731e8400000000000000000000000000000000:generic_hdd
4:mdisk4:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000004:ITSO-DS3500:60080e50001b09e8000006f64e731bff00000000000000000000000000000000:generic_hdd
5:mdisk5:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000005:ITSO-DS3500:60080e50001b0b62000007b64e731ea900000000000000000000000000000000:generic_hdd
6:mdisk6:online:unmanaged:::10.0GB:0000000000000006:ITSO-DS3500:60080e50001b09e80000085f4e7d60dd00000000000000000000000000000000:generic_hdd

9.2.8 Adding MDisks to a storage pool


If you have created an empty storage pool, or you simply want to assign additional MDisks to an already configured storage pool, you can use the addmdisk command to populate the storage pool (Example 9-16).
Example 9-16 addmdisk command
IBM_2145:ITSO_SVC1:admin>addmdisk -mdisk mdisk6 STGPool_Multi_Tier

This command adds the MDisk named mdisk6 to the storage pool named STGPool_Multi_Tier. You can only add unmanaged MDisks to a storage pool.

Important: Do not add this MDisk to a storage pool if you want to create an image mode volume from the MDisk that you are adding. As soon as you add an MDisk to a storage pool, it becomes managed, and extent mapping is not necessarily one-to-one anymore.

9.2.9 Showing MDisks in a storage pool


Use the lsmdisk -filtervalue command, as shown in Example 9-17, to see which MDisks are part of a specific storage pool. This command lists all of the MDisks that belong to the storage pool named STGPool_DS3500-1.
Example 9-17 lsmdisk -filtervalue: MDisks in a storage pool
IBM_2145:ITSO_SVC1:admin>lsmdisk -filtervalue mdisk_grp_name=STGPool_DS3500-1
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID tier
0 mdisk0 online managed 0 STGPool_DS3500-1 128.0GB 0000000000000000 DS3500 60080e50001b0b62000007b04e731e4d00000000000000000000000000000000 generic_hdd
1 mdisk1 online managed 0 STGPool_DS3500-1 128.0GB 0000000000000001 DS3500 60080e50001b0b62000007b24e731e6000000000000000000000000000000000 generic_hdd
2 mdisk2 online managed 0 STGPool_DS3500-1 128.0GB 0000000000000002 DS3500 60080e50001b09e8000006f44e731bdc00000000000000000000000000000000 generic_hdd

You can also use a wildcard with this command. For example, filtering on the value STGPool_*, where the asterisk (*) is a wildcard, lists the MDisks in all of the storage pools whose names begin with STGPool_, as sketched below.
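An invocation of the following form performs that wildcard filter. This is a sketch only: the quoting is an assumption that depends on how you invoke the CLI, so adjust it for your shell if necessary.

IBM_2145:ITSO_SVC1:admin>lsmdisk -filtervalue "mdisk_grp_name=STGPool_*" -delim :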

9.2.10 Working with a storage pool


Before we can create any volumes on the SVC clustered system, we need to virtualize the storage that is allocated to the SVC. After LUNs have been assigned to the SVC and appear as MDisks, we cannot start using them until they are members of a storage pool. Therefore, one of our first operations is to create a storage pool where we can place our MDisks. This section describes the operations on MDisks and storage pools, and explains the tasks that we can perform at the storage pool level.

9.2.11 Creating a storage pool


After a successful login to the CLI of the SVC, we create the storage pool by using the mkmdiskgrp command, as shown in Example 9-18 on page 477.
Example 9-18 mkmdiskgrp
IBM_2145:ITSO_SVC1:admin>mkmdiskgrp -name STGPool_Multi_Tier -ext 256
MDisk Group, id [3], successfully created

This command creates a storage pool called STGPool_Multi_Tier. The extent size that is used within this pool is 256 MB. We have not added any MDisks to the storage pool yet, so it is an empty storage pool.

You can also add unmanaged MDisks and create the storage pool in the same command: use mkmdiskgrp with the -mdisk parameter and enter the IDs or names of the MDisks. The MDisks are added immediately after the storage pool is created.

Before creating the storage pool, enter the lsmdisk command, as shown in Example 9-19, to list all of the available MDisks that are seen by the SVC system.
Example 9-19 Listing available MDisks
IBM_2145:ITSO_SVC1:admin>lsmdisk -filtervalue controller_name=ITSO-DS3500 -delim :
id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID:tier
0:mdisk0:online:managed:0:STGPool_DS3500-1:128.0GB:0000000000000000:ITSO-DS3500:60080e50001b0b62000007b04e731e4d00000000000000000000000000000000:generic_hdd
1:mdisk1:online:managed:0:STGPool_DS3500-1:128.0GB:0000000000000001:ITSO-DS3500:60080e50001b0b62000007b24e731e6000000000000000000000000000000000:generic_hdd
2:mdisk2:online:managed:0:STGPool_DS3500-1:128.0GB:0000000000000002:ITSO-DS3500:60080e50001b09e8000006f44e731bdc00000000000000000000000000000000:generic_hdd
3:mdisk3:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000003:ITSO-DS3500:60080e50001b0b62000007b44e731e8400000000000000000000000000000000:generic_hdd
4:mdisk4:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000004:ITSO-DS3500:60080e50001b09e8000006f64e731bff00000000000000000000000000000000:generic_hdd
5:mdisk5:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000005:ITSO-DS3500:60080e50001b0b62000007b64e731ea900000000000000000000000000000000:generic_hdd
6:mdisk6:online:unmanaged:::10.0GB:0000000000000006:DS3500:60080e50001b09e80000085f4e7d60dd00000000000000000000000000000000:generic_hdd
8:mdisk7:online:unmanaged:::10.0GB:0000000000000008:DS3500:60080e50001b09e8000008614e7d8a2c00000000000000000000000000000000:generic_hdd

Using the same command as before (mkmdiskgrp) and knowing the MDisk IDs that we are using, we can add multiple MDisks to the storage pool at the same time. We now add the unmanaged MDisks to the storage pool that we created, as shown in Example 9-20.
Example 9-20 Creating a storage pool and adding available MDisks
IBM_2145:ITSO_SVC1:admin>mkmdiskgrp -name STGPool_DS5000 -ext 256 -mdisk 6:8
MDisk Group, id [2], successfully created

This command creates a storage pool called STGPool_DS5000. The extent size that is used within this group is 256 MB, and two MDisks (6 and 8) are added to the storage pool.


Storage pool name: The -name and -mdisk parameters are optional. If you do not enter a -name, the default is MDiskgrpx, where x is the ID sequence number that is assigned by the SVC internally. If you do not enter the -mdisk parameter, an empty storage pool is created. If you want to provide a name, you can use letters A to Z, a to z, numbers 0 to 9, and the underscore. The name can be between one and 63 characters in length, but it cannot start with a number or the word MDiskgrp (because this prefix is reserved for SVC assignment only).

By running the lsmdisk command, you now see the MDisks as managed and as members of their storage pools, as shown in Example 9-21.
Example 9-21 lsmdisk command
IBM_2145:ITSO_SVC1:admin>lsmdisk -filtervalue controller_name=ITSO-DS3500 -delim :
id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID:tier
0:mdisk0:online:managed:0:STGPool_DS3500-1:128.0GB:0000000000000000:ITSO-DS3500:60080e50001b0b62000007b04e731e4d00000000000000000000000000000000:generic_hdd
1:mdisk1:online:managed:0:STGPool_DS3500-1:128.0GB:0000000000000001:ITSO-DS3500:60080e50001b0b62000007b24e731e6000000000000000000000000000000000:generic_hdd
2:mdisk2:online:managed:0:STGPool_DS3500-1:128.0GB:0000000000000002:ITSO-DS3500:60080e50001b09e8000006f44e731bdc00000000000000000000000000000000:generic_hdd
3:mdisk3:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000003:ITSO-DS3500:60080e50001b0b62000007b44e731e8400000000000000000000000000000000:generic_hdd
4:mdisk4:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000004:ITSO-DS3500:60080e50001b09e8000006f64e731bff00000000000000000000000000000000:generic_hdd
5:mdisk5:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000005:ITSO-DS3500:60080e50001b0b62000007b64e731ea900000000000000000000000000000000:generic_hdd
6:mdisk6:online:managed:2:STGPool_DS3500-2:10.0GB:0000000000000006:ITSO-DS3500:60080e50001b09e80000085f4e7d60dd00000000000000000000000000000000:generic_hdd
7:mdisk7:online:managed:3:STGPool_Multi_Tier:10.0GB:0000000000000007:ITSO-DS3500:60080e50001b0b620000091f4e7d8c9400000000000000000000000000000000:generic_hdd
8:mdisk8:online:managed:2:STGPool_DS3500-2:10.0GB:0000000000000008:ITSO-DS3500:60080e50001b09e8000008614e7d8a2c00000000000000000000000000000000:generic_hdd
9:mdisk9:online:managed:3:STGPool_Multi_Tier:10.0GB:0000000000000009:ITSO-DS3500:60080e50001b0b62000009214e7d928000000000000000000000000000000000:generic_hdd

At this point, you have completed the tasks that are required to create a new storage pool.

9.2.12 Viewing storage pool information


Use the lsmdiskgrp command, as shown in Example 9-22, to display information about the storage pools that are defined in the SVC.
Example 9-22 lsmdiskgrp command
IBM_2145:ITSO_SVC1:admin>lsmdiskgrp -delim :
id:name:status:mdisk_count:vdisk_count:capacity:extent_size:free_capacity:virtual_capacity:used_capacity:real_capacity:overallocation:warning:easy_tier:easy_tier_status
0:STGPool_DS3500-1:online:3:11:382.50GB:256:62.50GB:320.00GB:320.00GB:320.00GB:83:0:auto:inactive
1:STGPool_DS3500-2:online:3:11:384.00GB:256:262.00GB:122.00GB:122.00GB:122.00GB:31:0:auto:inactive
2:STGPool_DS5000-1:online:2:0:20.00GB:256:20.00GB:0.00MB:0.00MB:0.00MB:0:0:auto:inactive
3:STGPool_Multi_Tier:online:2:0:20.00GB:256:20.00GB:0.00MB:0.00MB:0.00MB:0:0:auto:inactive


9.2.13 Renaming a storage pool


Use the chmdiskgrp command to change the name of a storage pool. To verify the change, run the lsmdiskgrp command. Example 9-23 shows both of these commands.
Example 9-23 chmdiskgrp command

IBM_2145:ITSO_SVC1:admin>chmdiskgrp -name STGPool_DS3500-2_new 1
IBM_2145:ITSO_SVC1:admin>lsmdiskgrp -delim :
id:name:status:mdisk_count:vdisk_count:capacity:extent_size:free_capacity:virtual_capacity:used_capacity:real_capacity:overallocation:warning:easy_tier:easy_tier_status
0:STGPool_DS3500-1:online:3:11:382.50GB:256:62.50GB:320.00GB:320.00GB:320.00GB:83:0:auto:inactive
1:STGPool_DS3500-2_new:online:3:11:384.00GB:256:262.00GB:122.00GB:122.00GB:122.00GB:31:0:auto:inactive
2:STGPool_DS5000-1:online:2:0:20.00GB:256:20.00GB:0.00MB:0.00MB:0.00MB:0:0:auto:inactive
3:STGPool_Multi_Tier:online:2:0:20.00GB:256:20.00GB:0.00MB:0.00MB:0.00MB:0:0:auto:inactive

This command renamed the storage pool STGPool_DS3500-2 to STGPool_DS3500-2_new, as shown.

Changing the storage pool name: The chmdiskgrp command specifies the new name first. You can use letters A to Z, a to z, numbers 0 to 9, the dash (-), and the underscore (_). The new name can be between one and 63 characters in length. However, the new name cannot start with a number, a dash, or the word mdiskgrp (because this prefix is reserved for SVC assignment only).

9.2.14 Deleting a storage pool


Use the rmmdiskgrp command to remove a storage pool from the SVC system configuration (Example 9-24).
Example 9-24 rmmdiskgrp
IBM_2145:ITSO_SVC1:admin>rmmdiskgrp STGPool_DS3500-2_new

This command removes storage pool STGPool_DS3500-2_new from the SVC system configuration.

Removing a storage pool from the SVC system configuration: If there are MDisks within the storage pool, you must use the -force flag to remove the storage pool from the SVC system configuration, for example:
rmmdiskgrp STGPool_DS3500-2_new -force
Ensure that you definitely want to use this flag, because it destroys all mapping information and data held on the volumes, which cannot be recovered.


9.2.15 Removing MDisks from a storage pool


Use the rmmdisk command to remove an MDisk from a storage pool (Example 9-25).
Example 9-25 rmmdisk command
IBM_2145:ITSO_SVC1:admin>rmmdisk -mdisk 8 -force 2

This command removes the MDisk with ID 8 from the storage pool with ID 2. The -force flag is set because volumes are using this storage pool.

Sufficient space: The removal takes place only if there is sufficient space to migrate the volume data to other extents on other MDisks that remain in the storage pool. After you remove the MDisk from the storage pool, it takes time to change the mode from managed to unmanaged, depending on the size of the MDisk that you are removing.

9.3 Working with hosts


In this section we explain the tasks that you can perform at a host level. When we create a host in our SVC system, we need to define the connection method. Starting with SVC 5.1, we can now define our host as iSCSI-attached or FC-attached.

9.3.1 Creating a Fibre Channel-attached host


In the following sections we illustrate how to create an FC-attached host under various circumstances.

Host is powered on, connected, and zoned to the SVC


When you create your host on the SVC, it is good practice to check whether the host bus adapter (HBA) worldwide port names (WWPNs) of the server are visible to the SVC. By doing that, you ensure that zoning is done and that the correct WWPN will be used. Issue the lshbaportcandidate command, as shown in Example 9-26.
Example 9-26 lshbaportcandidate command
IBM_2145:ITSO_SVC1:admin>lshbaportcandidate
id
210000E08B89C1CD
210000E08B054CAA

After you know that the WWPNs that are displayed match your host (use host or SAN switch utilities to verify), use the mkhost command to create your host.

Name: If you do not provide the -name parameter, the SVC automatically generates the name hostx (where x is the ID sequence number that is assigned by the SVC internally). You can use the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the underscore (_). The name can be between one and 63 characters in length. However, the name cannot start with a number, a dash, or the word host (because this prefix is reserved for SVC assignment only).


The command to create a host is shown in Example 9-27.


Example 9-27 mkhost

IBM_2145:ITSO_SVC1:admin>mkhost -name Almaden -hbawwpn 210000E08B89C1CD:210000E08B054CAA
Host, id [2], successfully created

This command creates a host called Almaden using WWPNs 21:00:00:E0:8B:89:C1:CD and 21:00:00:E0:8B:05:4C:AA.

Ports: You can define from one to eight ports per host, or you can use the addhostport command, which we show in 9.3.5, Adding ports to a defined host on page 484.

Host is not powered on or not connected to the SAN


If you want to create a host on the SVC without seeing your target WWPN by using the lshbaportcandidate command, add the -force flag to your mkhost command, as shown in Example 9-28. This option is more open to human error than choosing the WWPN from a list, but it is typically used when many host definitions are created at the same time, such as through a script. In this case, you can type the WWPN of your HBA or HBAs and use the -force flag to create the host, regardless of whether the ports are connected.
Example 9-28 mkhost -force

IBM_2145:ITSO_SVC1:admin>mkhost -name Almaden -hbawwpn 210000E08B89C1CD:210000E08B054CAA -force
Host, id [2], successfully created

This command forces the creation of a host called Almaden using WWPNs 210000E08B89C1CD and 210000E08B054CAA.

Note: WWPNs are not case sensitive in the CLI.

9.3.2 Creating an iSCSI-attached host


Now we can create a host definition for a host that is not connected to the SAN but that has LAN access to our SVC nodes. Before we create the host definition, we configure our SVC system to use the iSCSI connection method. We describe additional information about configuring your nodes to use iSCSI in 9.8.3, iSCSI configuration on page 520. The iSCSI functionality allows the host to access volumes through the SVC without being attached to the SAN. Back-end storage and node-to-node communication still use the FC network, but the host does not need to be connected to the SAN. When we create a host that is going to use iSCSI as its communication method, iSCSI initiator software must be installed on the host to initiate the communication between the SVC and the host. This installation creates an iSCSI qualified name (IQN) identifier that is needed before we create our host.


Before we start, we check our server's IQN address. We are running Windows Server 2008. We select Start -> Programs -> Administrative Tools, and then we select iSCSI Initiator. In our example, our IQN, as shown in Figure 9-1, is: iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com

Figure 9-1 IQN from the iSCSI initiator tool

We create the host by issuing the mkhost command, as shown in Example 9-29. When the command completes successfully, we display our newly created host.
Example 9-29 mkhost command

IBM_2145:ITSO_SVC1:admin>mkhost -name Baldur -iogrp 0 -iscsiname iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com
Host, id [4], successfully created
IBM_2145:ITSO_SVC1:admin>lshost 4
id 4
name Baldur
port_count 1
type generic
mask 1111
iogrp_count 1
iscsiname iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com
node_logged_in_count 0
state offline

It is important to know that when the host is initially configured, the default authentication method is set to no authentication and no Challenge Handshake Authentication Protocol (CHAP) secret is set. To set a CHAP secret for authenticating the iSCSI host with the SVC system, use the chhost command with the chapsecret parameter.
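For example, a CHAP secret could be set with an invocation of the following form. This is a sketch only: the secret value ITSOsecret is purely illustrative, and you should confirm the -chapsecret parameter against the CLI guide for your code level.

IBM_2145:ITSO_SVC1:admin>chhost -chapsecret ITSOsecret Baldur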


We have now created our host definition. We map a volume to our new iSCSI server, as shown in Example 9-30. We have already created the volume, as shown in 9.5.1, Creating a volume on page 487. In our scenario, our volume has ID 21 and the host name is Baldur. We map it to our iSCSI host.
Example 9-30 Mapping a volume to the iSCSI host

IBM_2145:ITSO_SVC1:admin>mkvdiskhostmap -host Baldur 21
Virtual Disk to Host map, id [0], successfully created

After the volume has been mapped to the host, we display the host information again, as shown in Example 9-31.
Example 9-31 lshost

IBM_2145:ITSO_SVC1:admin>lshost 4
id 4
name Baldur
port_count 1
type generic
mask 1111
iogrp_count 1
iscsiname iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com
node_logged_in_count 1
state online

Note: FC hosts and iSCSI hosts are handled in the same way operationally after they have been created.

If you need to display a CHAP secret for an already defined server, use the lsiscsiauth command. The lsiscsiauth command lists the Challenge Handshake Authentication Protocol (CHAP) secret configured for authenticating an entity to the SAN Volume Controller system.
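Run without parameters, the command lists the configured authentication entries. The following invocation is a sketch only; the exact output columns depend on your code level, so none are shown here.

IBM_2145:ITSO_SVC1:admin>lsiscsiauth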

9.3.3 Modifying a host


Use the chhost command to change the name of a host. To verify the change, run the lshost command. Example 9-32 shows both of these commands.
Example 9-32 chhost command

IBM_2145:ITSO_SVC1:admin>chhost -name Angola Guinea
IBM_2145:ITSO_SVC1:admin>lshost
id name   port_count iogrp_count
0  Palau  2          4
1  Nile   2          1
2  Kanaga 2          1
3  Siam   2          2
4  Angola 1          4

This command renamed the host from Guinea to Angola.


Note: The chhost command specifies the new name first. You can use letters A to Z and a to z, numbers 0 to 9, the dash (-), and the underscore (_). The new name can be between one and 63 characters in length. However, it cannot start with a number, dash, or the word host (because this prefix is reserved for SVC assignment only).

Note: If you use Hewlett-Packard UNIX (HP-UX), you use the -type option. See IBM System Storage Open Software Family SAN Volume Controller: Host Attachment Guide, SC26-7563, for more information about the hosts that require the -type parameter.
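As a sketch, changing the type of an already defined HP-UX host might look like the following invocation. The host name HPUX_Host is illustrative only, and the hpux value is our assumption; verify the valid -type values for your code level in the referenced guide.

IBM_2145:ITSO_SVC1:admin>chhost -type hpux HPUX_Host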

9.3.4 Deleting a host


Use the rmhost command to delete a host from the SVC configuration. If your host is still mapped to volumes and you use the -force flag, the host and all of the mappings with it are deleted. The volumes are not deleted, only the mappings to them. The command that is shown in Example 9-33 deletes the host called Angola from the SVC configuration.
Example 9-33 rmhost Angola

IBM_2145:ITSO_SVC1:admin>rmhost Angola

Deleting a host: If there are any volumes assigned to the host, you must use the -force flag, for example: rmhost -force Angola

9.3.5 Adding ports to a defined host


If you add an HBA or a network interface controller (NIC) to a server that is already defined within the SVC, you can use the addhostport command to add the new port definitions to your host configuration. If your host is currently connected through SAN with FC and if the WWPN is already zoned to the SVC system, issue the lshbaportcandidate command, as shown in Example 9-34, to compare with the information that you have from the server administrator.
Example 9-34 lshbaportcandidate

IBM_2145:ITSO_SVC1:admin>lshbaportcandidate
id
210000E08B054CAA

If the WWPN matches your information (use host or SAN switch utilities to verify), use the addhostport command to add the port to the host. Example 9-35 shows the command to add a host port.
Example 9-35 addhostport

IBM_2145:ITSO_SVC1:admin>addhostport -hbawwpn 210000E08B054CAA Palau

This command adds the WWPN of 210000E08B054CAA to the Palau host.


Adding multiple ports: You can add multiple ports at one time by using a colon (:) as the separator between WWPNs, for example:
addhostport -hbawwpn 210000E08B054CAA:210000E08B89C1CD Palau

If the new HBA is not connected or zoned, the lshbaportcandidate command does not display your WWPN. In this case, you can manually type the WWPN of your HBA or HBAs and use the -force flag to add the port, as shown in Example 9-36.
Example 9-36 addhostport

IBM_2145:ITSO_SVC1:admin>addhostport -hbawwpn 210000E08B054CAA -force Palau

This command forces the addition of the WWPN named 210000E08B054CAA to the host called Palau.

WWPNs: WWPNs are not case sensitive within the CLI.

If you run the lshost command again, you see your host with an updated port count of 2, as shown in Example 9-37.
Example 9-37 lshost command: Port count

IBM_2145:ITSO_SVC1:admin>lshost
id name       port_count iogrp_count
0  Palau      2          4
1  ITSO_W2008 1          4
2  Thor       3          1
3  Frigg      1          1
4  Baldur     1          1

If your host currently uses iSCSI as a connection method, you must have the new iSCSI IQN ID before you add the port. Unlike FC-attached hosts, you cannot check for available candidates with iSCSI. After you have acquired the additional iSCSI IQN, use the addhostport command, as shown in Example 9-38.
Example 9-38 Adding an iSCSI port to an already configured host

IBM_2145:ITSO_SVC1:admin>addhostport -iscsiname iqn.1991-05.com.microsoft:baldur 4

9.3.6 Deleting ports


If you make a mistake when adding a port, or if you remove an HBA from a server that is already defined within the SVC, you can use the rmhostport command to remove WWPN definitions from an existing host. Before you remove the WWPN, be sure that it is the correct WWPN by issuing the lshost command, as shown in Example 9-39.
Example 9-39 lshost command

IBM_2145:ITSO_SVC1:admin>lshost Palau
id 0
name Palau
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B054CAA
node_logged_in_count 2
state active
WWPN 210000E08B89C1CD
node_logged_in_count 2
state offline

When you know the WWPN or iSCSI IQN, use the rmhostport command to delete a host port, as shown in Example 9-40.
Example 9-40 rmhostport

For removing a WWPN:
IBM_2145:ITSO_SVC1:admin>rmhostport -hbawwpn 210000E08B89C1CD Palau

For removing an iSCSI IQN:
IBM_2145:ITSO_SVC1:admin>rmhostport -iscsiname iqn.1991-05.com.microsoft:baldur Baldur

These commands remove the WWPN of 210000E08B89C1CD from the Palau host and the iSCSI IQN iqn.1991-05.com.microsoft:baldur from the Baldur host.

Removing multiple ports: You can remove multiple ports at one time by using a colon (:) as the separator between the port names, for example:
rmhostport -hbawwpn 210000E08B054CAA:210000E08B892BCD Angola

9.4 Working with the Ethernet port for iSCSI


This section details commands that are useful for setting, changing, and displaying the SVC Ethernet port configuration for iSCSI. Example 9-41 shows the lsportip command listing the iSCSI IP addresses assigned to each port on each node in the system.
Example 9-41 lsportip command

IBM_2145:ITSO_SVC1:admin>lsportip
id node_id node_name IP_address  mask          gateway      MAC               duplex state        speed failover
1  1       node1                                            00:1a:64:95:2f:cc Full   unconfigured 1Gb/s no
1  1       node1                                            00:1a:64:95:2f:cc Full   unconfigured 1Gb/s yes
2  1       node1     10.44.36.64 255.255.255.0 10.44.36.254 00:1a:64:95:2f:ce Full   online       1Gb/s no
2  1       node1                                            00:1a:64:95:2f:ce Full   online       1Gb/s yes
1  2       node2                                            00:1a:64:95:3f:4c Full   unconfigured 1Gb/s no
1  2       node2                                            00:1a:64:95:3f:4c Full   unconfigured 1Gb/s yes
2  2       node2     10.44.36.65 255.255.255.0 10.44.36.254 00:1a:64:95:3f:4e Full   online       1Gb/s no
2  2       node2                                            00:1a:64:95:3f:4e Full   online       1Gb/s yes
1  3       node3                                            00:21:5e:41:53:18 Full   unconfigured 1Gb/s no
1  3       node3                                            00:21:5e:41:53:18 Full   unconfigured 1Gb/s yes
2  3       node3     10.44.36.60 255.255.255.0 10.44.36.254 00:21:5e:41:53:1a Full   online       1Gb/s no
2  3       node3                                            00:21:5e:41:53:1a Full   online       1Gb/s yes
1  4       node4                                            00:21:5e:41:56:8c Full   unconfigured 1Gb/s no
1  4       node4                                            00:21:5e:41:56:8c Full   unconfigured 1Gb/s yes
2  4       node4     10.44.36.63 255.255.255.0 10.44.36.254 00:21:5e:41:56:8e Full   online       1Gb/s no
2  4       node4                                            00:21:5e:41:56:8e Full   online       1Gb/s yes

(The IPv6 columns IP_address_6, prefix_6, and gateway_6 are unset in this configuration and are omitted here.)

Example 9-42 shows how the cfgportip command assigns an IP address to each node Ethernet port for iSCSI I/O.
Example 9-42 cfgportip command

IBM_2145:ITSO_SVC1:admin>cfgportip -node 4 -ip 10.44.36.63 -gw 10.44.36.254 -mask 255.255.255.0 2
IBM_2145:ITSO_SVC1:admin>cfgportip -node 1 -ip 10.44.36.64 -gw 10.44.36.254 -mask 255.255.255.0 2
IBM_2145:ITSO_SVC1:admin>cfgportip -node 2 -ip 10.44.36.65 -gw 10.44.36.254 -mask 255.255.255.0 2

9.5 Working with volumes


This section details the various configuration and administration tasks that can be performed on the volume within the SVC environment.

9.5.1 Creating a volume


The mkvdisk command creates sequential, striped, or image mode volume objects. When they are mapped to a host object, these objects are seen as disk drives with which the host can perform I/O operations.


When creating a volume, you must enter several parameters at the CLI. There are both mandatory and optional parameters. See the full command string and detailed information in Command-Line Interface User's Guide, SC27-2287.

Creating an image mode disk: If you do not specify the -size parameter when you create an image mode disk, the entire MDisk capacity is used.

You must know the following information before you start creating the volume:
- In which storage pool the volume is going to have its extents
- From which I/O Group the volume will be accessed
- Which SVC node will be the preferred node for the volume
- Size of the volume
- Name of the volume
- Type of the volume
- Whether this volume will be managed by Easy Tier to optimize its performance

When you are ready to create your striped volume, use the mkvdisk command (we discuss sequential and image mode volumes later). In Example 9-43, the command creates a 10 GB striped volume within the storage pool STGPool_DS3500-2 and assigns it to the io_grp0 I/O Group. Its preferred node will be node 1.
Example 9-43 mkvdisk command

IBM_2145:ITSO_SVC1:admin>mkvdisk -mdiskgrp STGPool_DS3500-2 -iogrp io_grp0 -node 1 -size 10 -unit gb -name Tiger
Virtual Disk, id [20], successfully created

To verify the results use the lsvdisk command, as shown in Example 9-44.
Example 9-44 lsvdisk command
IBM_2145:ITSO_SVC1:admin>lsvdisk 20
id 20
name Tiger
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name STGPool_DS3500-2
capacity 10.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AF813F1000000000000016
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
filesystem
mirror_write_priority latency
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name STGPool_DS3500-2
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 10.00GB

At this point, you have completed the required tasks to create a volume.

9.5.2 Volume information


Use the lsvdisk command to display summary information about all volumes defined within the SVC environment. To display more detailed information about a specific volume, run the command again and append the volume name parameter or the volume ID. Example 9-45 shows both of these commands.
Example 9-45 lsvdisk command
IBM_2145:ITSO_SVC1:admin>lsvdisk
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state se_copy_count RC_change
0 Volume_A 0 io_grp0 online 0 Pool_DS3500-1 10.00GB striped 0 GMREL1 6005076801AF813F1000000000000031 0 1 empty 0 0 no
1 Volume_B 0 io_grp0 online 0 Pool_DS3500-1 10.00GB striped 1 GMREL2 6005076801AF813F1000000000000032 0 1 empty 0 0 no
2 Volume_C 0 io_grp0 online 0 Pool_DS3500-1 10.00GB striped 2 GMREL3 6005076801AF813F1000000000000033 0 1 empty 0 0 no

IBM_2145:ITSO_SVC1:admin>lsvdisk Volume_A
id 0
name Volume_A
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name Pool_DS3500-1
capacity 10.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id 0
RC_name GMREL1
vdisk_UID 6005076801AF813F1000000000000031
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
filesystem
mirror_write_priority latency
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name Pool_DS3500-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 10.00GB


9.5.3 Creating a thin-provisioned volume


Example 9-46 shows how to create a thin-provisioned volume. In addition to the normal parameters, you must use the following parameters:

-rsize: This parameter makes the volume a thin-provisioned volume; otherwise, the volume is fully allocated.
-autoexpand: This parameter specifies that thin-provisioned volume copies automatically expand their real capacities by allocating new extents from their storage pool.
-grainsize: This parameter sets the grain size (in KB) for a thin-provisioned volume.

Example 9-46 Usage of the command mkvdisk
IBM_2145:ITSO_SVC1:admin>mkvdisk -mdiskgrp STGPool_DS3500-2 -iogrp 0 -vtype striped -size 10 -unit gb -rsize 50% -autoexpand -grainsize 32
Virtual Disk, id [21], successfully created

This command creates a thin-provisioned 10 GB volume. The volume belongs to the storage pool named STGPool_DS3500-2 and is owned by the io_grp0 I/O Group. The real capacity automatically expands until the volume size of 10 GB is reached. The grain size is set to 32 KB, which is the default.

Disk size: When using the -rsize parameter, you have the following options: disk_size, disk_size_percentage, and auto. Specify the disk_size_percentage value using an integer, or an integer immediately followed by the percent (%) symbol. Specify the units for a disk_size integer using the -unit parameter; the default is MB. The -rsize value can be greater than, equal to, or less than the size of the volume. The auto option creates a volume copy that uses the entire size of the MDisk. If you specify the -rsize auto option, you must also specify the -vtype image option. An entry of 1 GB uses 1024 MB.

9.5.4 Creating a volume in image mode


This virtualization type allows an image mode volume to be created when an MDisk already has data on it, perhaps from a previrtualized subsystem. When an image mode volume is created, it directly corresponds to the previously unmanaged MDisk from which it was created. Therefore, with the exception of a thin-provisioned image mode volume, the volume's logical block address (LBA) x equals MDisk LBA x. You can use this command to bring a non-virtualized disk under the control of the clustered system. After it is under the control of the clustered system, you can migrate the volume from the single managed disk. As soon as the first MDisk extent has been migrated, the volume is no longer an image mode volume. You can add an image mode volume to a storage pool that is already populated with other types of volumes, such as striped or sequential volumes.


Size: An image mode volume must be at least 512 bytes (the capacity cannot be 0). That is, the minimum size that can be specified for an image mode volume must be the same as the storage pool extent size to which it is added, with a minimum of 16 MB.

You must use the -mdisk parameter to specify an MDisk that has a mode of unmanaged. The -fmtdisk parameter cannot be used to create an image mode volume.

Capacity: If you create a mirrored volume from two image mode MDisks without specifying a -capacity value, the capacity of the resulting volume is the smaller of the two MDisks, and the remaining space on the larger MDisk is inaccessible. If you do not specify the -size parameter when you create an image mode disk, the entire MDisk capacity is used.

Use the mkvdisk command to create an image mode volume, as shown in Example 9-47.
Example 9-47 mkvdisk (image mode)
IBM_2145:ITSO_SVC1:admin>mkvdisk -mdiskgrp STGPool_DS3500-1 -iogrp 0 -mdisk mdisk10 -vtype image -name Image_Volume_A
Virtual Disk, id [22], successfully created

This command creates an image mode volume called Image_Volume_A using the mdisk10 MDisk. The volume belongs to the storage pool STGPool_DS3500-1 and is owned by the io_grp0 I/O Group. If we run the lsvdisk command again, notice that the volume named Image_Volume_A has a type of image, as shown in Example 9-48.
Example 9-48 lsvdisk
IBM_2145:ITSO_SVC1:admin>lsvdisk -filtervalue type=image
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state se_copy_count RC_change
22 Image_Volume_A 0 io_grp0 online 0 STGPool_DS3500-1 10.00GB image 6005076801AF813F1000000000000018 0 1 empty 0 no

9.5.5 Adding a mirrored volume copy


You can create a mirrored copy of a volume, which keeps a volume accessible even when the MDisk on which it depends has become unavailable. You can create a copy of a volume either on separate storage pools or by creating an image mode copy of the volume. Copies increase the availability of data; however, they are not separate objects. You can only create or change mirrored copies from the volume. In addition, you can use volume mirroring as an alternative method of migrating volumes between storage pools. For example, if you have a non-mirrored volume in one storage pool and want to migrate that volume to another storage pool, you can add a new copy of the volume and specify the second storage pool. After the copies are synchronized, you can delete the copy on the first


storage pool. The volume is copied to the second storage pool while remaining online during the copy. To create a mirrored copy of a volume, use the addvdiskcopy command. This command adds a copy of the chosen volume to the selected storage pool, which changes a non-mirrored volume into a mirrored volume. In the following scenario, we show creating a mirrored volume from one storage pool to another storage pool. As you can see in Example 9-49, the volume has a copy with copy_id 0.
Example 9-49 lsvdisk
IBM_2145:ITSO_SVC1:admin>lsvdisk Volume_no_mirror
id 23
name Volume_no_mirror
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
capacity 1.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AF813F1000000000000019
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
filesystem
mirror_write_priority latency
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 1.00GB

In Example 9-50, we add the volume copy mirror by using the addvdiskcopy command.
Example 9-50 addvdiskcopy
IBM_2145:ITSO_SVC1:admin>addvdiskcopy -mdiskgrp STGPool_DS5000-1 -vtype striped -unit gb Volume_no_mirror
Vdisk [23] copy [1] successfully created

During the synchronization process, you can see the status by using the lsvdisksyncprogress command. As shown in Example 9-51, the first time that the status is checked, the synchronization progress is at 48%, and an estimated completion time of 110926203918 is reported (in YYMMDDHHMMSS format, that is, 26 September 2011 at 20:39:18). The second time that the command is run, the progress is at 100%, and the synchronization is complete.
Example 9-51 Synchronization
IBM_2145:ITSO_SVC1:admin>lsvdisksyncprogress
vdisk_id vdisk_name       copy_id progress estimated_completion_time
23       Volume_no_mirror 1       48       110926203918
IBM_2145:ITSO_SVC1:admin>lsvdisksyncprogress
vdisk_id vdisk_name       copy_id progress estimated_completion_time
23       Volume_no_mirror 1       100

As you can see in Example 9-52, the new mirrored volume copy (copy_id 1) has been added and can be seen by using the lsvdisk command.
Example 9-52 lsvdisk
IBM_2145:ITSO_SVC1:admin>lsvdisk 23
id 23
name Volume_no_mirror
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id many
mdisk_grp_name many
capacity 1.00GB
type many
formatted no
mdisk_id many
mdisk_name many
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AF813F1000000000000019
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 2
se_copy_count 0
filesystem
mirror_write_priority latency
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 1.00GB
copy_id 1
status online
sync yes
primary no
mdisk_grp_id 2
mdisk_grp_name STGPool_DS5000-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 1.00GB

When adding a volume copy mirror, you can define it with parameters that differ from those of the original volume copy. Therefore, you can define a thin-provisioned copy for a fully allocated volume, and vice versa, which is one way to migrate a fully allocated volume to a thin-provisioned volume.

Note: To change the parameters of a volume copy mirror, you must delete the volume copy and redefine it with the new values.

Now we can change the name of the volume just mirrored from Volume_no_mirror to Volume_mirrored, as shown in Example 9-53.
Example 9-53 Volume name changing

IBM_2145:ITSO_SVC1:admin>chvdisk -name Volume_mirrored Volume_no_mirror

9.5.6 Splitting a mirrored volume


The splitvdiskcopy command creates a new volume in the specified I/O Group from a copy of the specified volume. If the copy that you are splitting is not synchronized, you must use the -force parameter. The command fails if you are attempting to remove the only synchronized copy. To avoid this failure, wait for the copy to synchronize, or split the unsynchronized copy from the volume by using the -force parameter. You can run the command when either volume copy is offline. Example 9-54 shows the splitvdiskcopy command, which is used to split a mirrored volume. It creates a new volume, Volume_new from Volume_mirrored.
Example 9-54 Split volume
IBM_2145:ITSO_SVC1:admin>splitvdiskcopy -copy 1 -iogrp 0 -name Volume_new Volume_mirrored
Virtual Disk, id [24], successfully created

As you can see in Example 9-55, the new volume named Volume_new, has been created as an independent volume.
Example 9-55 lsvdisk
IBM_2145:ITSO_SVC1:admin>lsvdisk Volume_new
id 24
name Volume_new
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 2
mdisk_grp_name STGPool_DS5000-1
capacity 1.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AF813F100000000000001A
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
filesystem
mirror_write_priority latency
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 2
mdisk_grp_name STGPool_DS5000-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 1.00GB

By issuing the command in Example 9-54 on page 496, Volume_mirrored no longer has its mirrored copy, and a new independent volume is created automatically.

9.5.7 Modifying a volume


Executing the chvdisk command modifies a single property of a volume. Only one property can be modified at a time, so changing the name and modifying the I/O Group require two invocations of the command. You can specify a new name or label; the new name can be used subsequently to reference the volume. The I/O Group with which this volume is associated can also be changed. Note that changing the I/O Group requires a flush of the cache within the nodes in the current I/O Group to ensure that all data is written to disk. I/O must be suspended at the host level before performing this operation.
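As a sketch, the two separate invocations might look like the following. The names volume_D2 and io_grp1 are illustrative only, and the -iogrp parameter of chvdisk is our assumption for moving a volume between I/O Groups at this code level; remember that host I/O must be quiesced before the second command.

IBM_2145:ITSO_SVC1:admin>chvdisk -name volume_D2 volume_D
IBM_2145:ITSO_SVC1:admin>chvdisk -iogrp io_grp1 volume_D2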


Tips:
- If the volume has a mapping to any hosts, it is not possible to move the volume to an I/O Group that does not include any of those hosts.
- This operation fails if there is not enough space to allocate bitmaps for a mirrored volume in the target I/O Group.
- If the -force parameter is used and the system is unable to destage all write data from the cache, the contents of the volume are corrupted by the loss of the cached data.
- If the -force parameter is used to move a volume that has out-of-sync copies, a full resynchronization is required.

9.5.8 I/O governing


You can set a limit on the number of I/O operations accepted for a volume. The limit is set in terms of I/Os per second or MB per second. By default, no I/O governing rate is set when a volume is created. Base the choice between I/O and MB as the I/O governing throttle on the disk access profile of the application. Database applications generally issue large amounts of I/O, but they transfer only a relatively small amount of data. In this case, setting an I/O governing throttle based on MB per second does not achieve much; it is better to use an I/Os-per-second throttle. At the other extreme, a streaming video application generally issues a small amount of I/O but transfers large amounts of data. In contrast to the database example, setting an I/O governing throttle based on I/Os per second does not achieve much, so it is better to use an MB-per-second throttle.

I/O governing rate: An I/O governing rate of 0 (displayed as throttling in the CLI output of the lsvdisk command) does not mean that zero I/Os per second (or MB per second) can be achieved. It means that no throttle is set.

An example of the chvdisk command is shown in Example 9-56.
Example 9-56 chvdisk

IBM_2145:ITSO_SVC1:admin>chvdisk -rate 20 -unitmb volume_7
IBM_2145:ITSO_SVC1:admin>chvdisk -warning 85% volume_7

The first command changes the volume throttling of volume_7 to 20 MBps. The second command changes the thin-provisioned volume warning to 85%.

New name: The chvdisk command specifies the new name first. The name can consist of letters A to Z and a to z, numbers 0 to 9, the dash (-), and the underscore (_). It can be between one and 63 characters in length. However, it cannot start with a number, the dash, or the word vdisk (because this prefix is reserved for SVC assignment only).

To verify the changes, issue the lsvdisk command, as shown in Example 9-57.
Example 9-57 lsvdisk command: Verifying throttling
IBM_2145:ITSO_SVC1:admin>lsvdisk volume_7
id 1
name volume_7
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
capacity 10.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AF813F100000000000001F
virtual_disk_throttling (MB) 20
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 1
filesystem
mirror_write_priority latency
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 2.02GB
free_capacity 2.02GB
overallocation 496
autoexpand on
warning 85
grainsize 32
se_copy yes
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 2.02GB
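If you prefer an I/Os-per-second throttle instead, omit the -unitmb parameter, in which case the rate is interpreted as I/Os per second. The following invocation is a sketch only; the rate value 4000 is purely illustrative.

IBM_2145:ITSO_SVC1:admin>chvdisk -rate 4000 volume_7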


9.5.9 Deleting a volume


When executing this command on an existing fully managed mode volume, any data that remained on it will be lost. The extents that made up the volume are returned to the pool of free extents that are available in the storage pool. If any Remote Copy, FlashCopy, or host mappings still exist for this volume, the delete fails unless the -force flag is specified. This flag ensures the deletion of the volume and of any volume-to-host mappings and copy mappings. If the volume is currently the subject of a migration to image mode, the delete fails unless the -force flag is specified. This flag halts the migration and then deletes the volume. If the command succeeds (without the -force flag) for an image mode volume, the underlying back-end controller logical unit is consistent with the data that a host might previously have read from the image mode volume; that is, all fast write data has been flushed to the underlying LUN. If the -force flag is used, there is no such guarantee. If there is any non-destaged data in the fast write cache for this volume, the deletion of the volume fails unless the -force flag is specified, in which case any non-destaged data in the fast write cache is discarded. Use the rmvdisk command to delete a volume from your SVC configuration, as shown in Example 9-58.
Example 9-58 rmvdisk

IBM_2145:ITSO_SVC1:admin>rmvdisk volume_A

This command deletes the volume_A volume from the SVC configuration. If the volume is assigned to a host, you need to use the -force flag to delete the volume (Example 9-59).
Example 9-59 rmvdisk (-force)

IBM_2145:ITSO_SVC1:admin>rmvdisk -force volume_A

9.5.10 Expanding a volume


Expanding a volume presents a larger capacity disk to your operating system. Although this expansion can be easily performed using the SVC, you must ensure that your operating system supports expansion before using this function. Assuming that your operating system supports it, you can use the expandvdisksize command to increase the capacity of a given volume. Example 9-60 shows a sample of this command.
Example 9-60 expandvdisksize

IBM_2145:ITSO_SVC1:admin>expandvdisksize -size 5 -unit gb volume_C

This command expands the volume_C volume, which was 35 GB before, by another 5 GB, to give it a total size of 40 GB. To expand a thin-provisioned volume, you can use the -rsize option, as shown in Example 9-61 on page 501. This command changes the real size of the volume_B volume to a real capacity of 55 GB. The capacity of the volume remains unchanged.


Example 9-61 lsvdisk
IBM_2145:ITSO_SVC1:admin>lsvdisk volume_B
id 26
capacity 100.00GB
type striped
...
copy_id 0
status online
used_capacity 0.41MB
real_capacity 50.02GB
free_capacity 50.02GB
overallocation 199
autoexpand on
warning 80
grainsize 32
se_copy yes

IBM_2145:ITSO_SVC1:admin>expandvdisksize -rsize 5 -unit gb volume_B

IBM_2145:ITSO_SVC1:admin>lsvdisk volume_B
id 26
name volume_B
capacity 100.00GB
type striped
...
copy_id 0
status online
used_capacity 0.41MB
real_capacity 55.02GB
free_capacity 55.02GB
overallocation 181
autoexpand on
warning 80
grainsize 32
se_copy yes

Important: If a volume is expanded, its type will become striped, even if it was previously sequential or in image mode. If there are not enough extents to expand your volume to the specified size, you receive the following error message:
CMMVC5860E Ic_failed_vg_insufficient_virtual_extents

9.5.11 Assigning a volume to a host


Use the mkvdiskhostmap command to map a volume to a host. When executed, this command creates a new mapping between the volume and the specified host, which essentially presents this volume to the host as though the disk was directly attached to the host. It is only after this command is executed that the host can perform I/O to the volume. Optionally, a SCSI LUN ID can be assigned to the mapping. When the HBA on the host scans for devices that are attached to it, it discovers all of the volumes that are mapped to its FC ports. When the devices are found, each one is allocated an identifier (SCSI LUN ID). For example, the first disk found is generally SCSI LUN 1, and so on. You can control the order in which the HBA discovers volumes by assigning the SCSI LUN ID as required. If you do not specify a SCSI LUN ID, the system automatically assigns the next available SCSI LUN ID, given any mappings that already exist with that host. Using the volume and host definitions that we created in the previous sections, we assign volumes to hosts so that they are ready for use. We use the mkvdiskhostmap command (see Example 9-62).
Example 9-62 mkvdiskhostmap
IBM_2145:ITSO_SVC1:admin>mkvdiskhostmap -host Almaden volume_B
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO_SVC1:admin>mkvdiskhostmap -host Almaden volume_C
Virtual Disk to Host map, id [1], successfully created

These commands map volume_B and volume_C to the host Almaden. The resulting assignments are shown in Example 9-63.
Example 9-63 lshostvdiskmap command
IBM_2145:ITSO_SVC1:admin>lshostvdiskmap -delim :
id:name:SCSI_id:vdisk_id:vdisk_name:vdisk_UID
2:Almaden:0:26:volume_B:6005076801AF813F1000000000000020
2:Almaden:1:27:volume_C:6005076801AF813F1000000000000021

Assigning a specific LUN ID to a volume: The optional -scsi scsi_num parameter can help assign a specific LUN ID to a volume that is to be associated with a given host. The default (if nothing is specified) is to increment based on what is already assigned to the host. Be aware that certain HBA device drivers stop when they find a gap in the SCSI LUN IDs, for example:
- Volume 1 is mapped to Host 1 with SCSI LUN ID 1.
- Volume 2 is mapped to Host 1 with SCSI LUN ID 2.
- Volume 3 is mapped to Host 1 with SCSI LUN ID 4.
When the device driver scans the HBA, it might stop after discovering Volumes 1 and 2, because there is no SCSI LUN mapped with ID 3.

Important: Ensure that the SCSI LUN ID allocation is contiguous.

It is not possible to map a volume to a host more than one time at separate LUNs (Example 9-64).
Example 9-64 mkvdiskhostmap

IBM_2145:ITSO_SVC1:admin>mkvdiskhostmap -host Siam volume_A
Virtual Disk to Host map, id [0], successfully created

This command maps the volume called volume_A to the host called Siam. At this point, you have completed all tasks that are required to assign a volume to an attached host.
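If you need to control the SCSI LUN ID under which the volume is presented, the optional -scsi parameter that was described earlier can be added to the same command. The following invocation is a sketch only; the ID value 3 is purely illustrative.

IBM_2145:ITSO_SVC1:admin>mkvdiskhostmap -host Siam -scsi 3 volume_A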

9.5.12 Showing volumes to host mapping


Use the lshostvdiskmap command to show which volumes are assigned to a specific host (Example 9-65).
Example 9-65 lshostvdiskmap

IBM_2145:ITSO_SVC1:admin>lshostvdiskmap -delim , Siam
id,name,SCSI_id,vdisk_id,vdisk_name,wwpn,vdisk_UID
3,Siam,0,0,volume_A,210000E08B18FF8A,60050768018301BF280000000000000C

From this output, you can see that the host Siam has one assigned volume, named volume_A. The SCSI LUN ID, which is the ID by which the volume is presented to the host, is also shown. If no host is specified, all defined host-to-volume mappings are returned.

Specifying the flag before the host name: Although the -delim flag normally comes at the end of the command string, in this case you must specify it before the host name. Otherwise, the command returns the following message:
CMMVC6070E An invalid or duplicated parameter, unaccompanied argument, or incorrect argument sequence has been detected. Ensure that the input is as per the help.

9.5.13 Deleting a volume to host mapping


When deleting a volume mapping, you are not deleting the volume itself, only the connection from the host to the volume. If you mapped a volume to a host by mistake, or you simply want to reassign the volume to another host, use the rmvdiskhostmap command to unmap a volume from a host (Example 9-66).
Example 9-66 rmvdiskhostmap

IBM_2145:ITSO_SVC1:admin>rmvdiskhostmap -host Tiger volume_D

This command unmaps the volume called volume_D from the host called Tiger.

9.5.14 Migrating a volume


From time to time, you might want to migrate volumes from one set of MDisks to another set of MDisks: to decommission an old disk subsystem, to obtain better balanced performance across your virtualized environment, or simply to migrate data into the SVC environment transparently using image mode. You can obtain further information about migration in Chapter 6, Data migration on page 227.

Important: After a migration is started, it continues until completion unless it is stopped or suspended by an error condition, or unless the volume being migrated is deleted.

As you can see from the parameters shown in Example 9-67 on page 504, before you can migrate your volume you must know the name of the volume that you want to migrate and the name of the storage pool to which you want to migrate it. To discover the names, run the lsvdisk and lsmdiskgrp commands.

After you know these details you can issue the migratevdisk command, as shown in Example 9-67.
Example 9-67 migratevdisk

IBM_2145:ITSO_SVC1:admin>migratevdisk -mdiskgrp STGPool_DS5000-1 -vdisk volume_C

This command moves volume_C to the storage pool named STGPool_DS5000-1.

Tips: If insufficient extents are available within your target storage pool, you receive an error message. Make sure that the source and target storage pools have the same extent size.

The optional -threads parameter allows you to assign a priority to the migration process. The default is 4, which is the highest priority setting. If you want the process to take a lower priority than other types of I/O, you can specify 3, 2, or 1.

You can run the lsmigrate command at any time to see the status of the migration process (Example 9-68).
Example 9-68 lsmigrate command
IBM_2145:ITSO_SVC1:admin>lsmigrate
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 27
migrate_target_mdisk_grp 2
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO_SVC1:admin>lsmigrate
migrate_type MDisk_Group_Migration
progress 76
migrate_source_vdisk_index 27
migrate_target_mdisk_grp 2
max_thread_count 4
migrate_source_vdisk_copy_id 0

Progress: The progress is given as a percentage complete. When the lsmigrate command returns no output, the migration process has finished.
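If you want running host I/O to take precedence over the migration, you can start the migration at a lower priority with the optional -threads parameter described above. The following is a minimal sketch that reuses the same hypothetical volume and pool as Example 9-67:

IBM_2145:ITSO_SVC1:admin>migratevdisk -mdiskgrp STGPool_DS5000-1 -threads 1 -vdisk volume_C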

9.5.15 Migrating a fully managed volume to an image mode volume


Migrating a fully managed volume to an image mode volume allows the SVC to be removed from the data path, which might be useful where the SVC is used as a data mover appliance. You can use the migratetoimage command. To migrate a fully managed volume to an image mode volume, the following rules apply:
- The destination MDisk must be greater than or equal to the size of the volume.
- The MDisk that is specified as the target must be in an unmanaged state.
- Regardless of the mode in which the volume starts, it is reported as managed mode during the migration.

- Both of the MDisks involved are reported as being in image mode during the migration.

If the migration is interrupted by a system recovery or by a cache problem, the migration resumes after the recovery completes. Example 9-69 shows an example of the command.
Example 9-69 migratetoimage
IBM_2145:ITSO_SVC1:admin>migratetoimage -vdisk volume_A -mdisk mdisk10 -mdiskgrp STGPool_IMAGE

In this example, the data from volume_A is migrated onto mdisk10, and the MDisk is placed into the STGPool_IMAGE storage pool as part of the migration.
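Before you run migratetoimage, you might want to confirm that the intended target MDisk is still in the unmanaged state that the rules above require. A minimal sketch using the lsmdisk command with a filter (mode is one of its filterable attributes):

IBM_2145:ITSO_SVC1:admin>lsmdisk -filtervalue mode=unmanaged -delim ,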

9.5.16 Shrinking a volume


The shrinkvdisksize command reduces the capacity that is allocated to a particular volume by the amount that you specify. You cannot shrink the real size of a thin-provisioned volume to less than its used size. All capacities, including changes, must be in multiples of 512 bytes. An entire extent is reserved even if it is only partially used. The default capacity unit is MB.

The command can be used to shrink the physical capacity that is allocated to a particular volume by the specified amount. It can also be used to shrink the virtual capacity of a thin-provisioned volume without altering the physical capacity that is assigned to the volume:
- For a non-thin-provisioned volume, use the -size parameter.
- For the real capacity of a thin-provisioned volume, use the -rsize parameter.
- For the virtual capacity of a thin-provisioned volume, use the -size parameter.

When the virtual capacity of a thin-provisioned volume is changed, the warning threshold is automatically scaled to match. The new threshold is stored as a percentage.

The system arbitrarily reduces the capacity of the volume by removing a partial extent, one extent, or multiple extents from those extents that are allocated to the volume. You cannot control which extents are removed, so you cannot assume that it is unused space that is removed.

Note that image mode volumes cannot be reduced in size. Instead, they must first be migrated to fully managed mode. To run the shrinkvdisksize command on a mirrored volume, all copies of the volume must be synchronized.

Important: If the volume contains data, do not shrink the disk. Certain operating systems or file systems use what they consider to be the outer edge of the disk for performance reasons.

This command can be used to shrink a FlashCopy target volume to the same capacity as the source. Before you shrink a volume, validate that the volume is not mapped to any host objects. You can determine the exact capacity of the source or master volume by issuing the svcinfo lsvdisk -bytes vdiskname command. Then shrink the volume by the required amount by issuing the following command:
shrinkvdisksize -size disk_size -unit b | kb | mb | gb | tb | pb vdisk_name | vdisk_id

Assuming your operating system supports it, you can use the shrinkvdisksize command to decrease the capacity of a given volume. Example 9-70 shows an example of this command.
Example 9-70 shrinkvdisksize
IBM_2145:ITSO_SVC1:admin>shrinkvdisksize -size 44 -unit gb volume_D

This command shrinks a volume called volume_D from a total size of 80 GB, by 44 GB, to a new total size of 36 GB.
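For a thin-provisioned volume, you can instead trim only the real (physical) capacity while leaving the virtual size unchanged, using the -rsize parameter described above. A minimal sketch, assuming the thin-provisioned volume volume_B from the earlier expandvdisksize example:

IBM_2145:ITSO_SVC1:admin>shrinkvdisksize -rsize 5 -unit gb volume_B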

9.5.17 Showing a volume on an MDisk


Use the lsmdiskmember command to display information about the volume that is using space on a specific MDisk, as shown in Example 9-71.
Example 9-71 lsmdiskmember command
IBM_2145:ITSO_SVC1:admin>lsmdiskmember mdisk8
id copy_id
24 0
27 0

This command displays a list of all of the volume IDs that correspond to the volume copies that use mdisk8. To correlate the IDs that are displayed in this output to volume names, we can run the lsvdisk command, which we discuss in more detail in 9.5, Working with volumes on page 487, as shown in the sketch that follows.
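For example, taking volume ID 24 from the output above, you can pass the ID directly to lsvdisk; the detailed view that it returns includes the name field. This is a minimal sketch, and the ID is taken from the hypothetical output above:

IBM_2145:ITSO_SVC1:admin>lsvdisk 24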

9.5.18 Showing which volumes are using a storage pool


Use the lsvdisk -filtervalue command, as shown in Example 9-72, to see which volumes are part of a specific storage pool. This command shows all of the volumes that are part of the storage pool named STGPool_DS3500-2.
Example 9-72 lsvdisk -filtervalue: VDisks in the MDG
IBM_2145:ITSO_SVC1:admin>lsvdisk -filtervalue mdisk_grp_name=STGPool_DS3500-2 -delim ,
id,name,IO_group_id,IO_group_name,status,mdisk_grp_id,mdisk_grp_name,capacity,type,FC_id,FC_name,RC_id,RC_name,vdisk_UID,fc_map_count,copy_count,fast_write_state,se_copy_count,RC_change
7,W2K3_SRV2_VOL01,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F1000000000000008,0,1,empty,0,0,no
8,W2K3_SRV2_VOL02,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F1000000000000009,0,1,empty,0,0,no
9,W2K3_SRV2_VOL03,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F100000000000000A,0,1,empty,0,0,no
10,W2K3_SRV2_VOL04,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F100000000000000B,0,1,empty,0,0,no
11,W2K3_SRV2_VOL05,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F100000000000000C,0,1,empty,0,0,no
12,W2K3_SRV2_VOL06,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F100000000000000D,0,1,empty,0,0,no
16,AIX_SRV2_VOL01,0,io_grp0,online,1,STGPool_DS3500-2,20.00GB,striped,,,,,6005076801AF813F1000000000000011,0,1,empty,0,0,no

9.5.19 Showing which MDisks are used by a specific volume


Use the lsvdiskmember command, as shown in Example 9-73, to show from which MDisks a specific volume's extents come.
Example 9-73 lsvdiskmember command

IBM_2145:ITSO_SVC1:admin>lsvdiskmember 0
id
4
5
6
7

If you want to know more about these MDisks, you can run the lsmdisk command, as explained in 9.2, Working with managed disks and disk controller systems on page 470 (using the ID displayed in Example 9-73 rather than the name).

9.5.20 Showing from which storage pool a volume has its extents
Use the lsvdisk command as shown in Example 9-74 to show to which storage pool a specific volume belongs.
Example 9-74 lsvdisk command: storage pool name
IBM_2145:ITSO_SVC1:admin>lsvdisk Volume_D
id 25
name Volume_D
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
capacity 10.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AF813F100000000000001E
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 1
filesystem
mirror_write_priority latency
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 2.02GB
free_capacity 2.02GB
overallocation 496
autoexpand on
warning 80
grainsize 32
se_copy yes
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 2.02GB

To learn more about these storage pools you can run the lsmdiskgrp command, as explained in 9.2.10, Working with a storage pool on page 476.

9.5.21 Showing the host to which the volume is mapped


To show the hosts to which a specific volume has been assigned, run the lsvdiskhostmap command as shown in Example 9-75.
Example 9-75 lsvdiskhostmap command
IBM_2145:ITSO_SVC1:admin>lsvdiskhostmap -delim , volume_B
id,name,SCSI_id,host_id,host_name,vdisk_UID
26,volume_B,0,2,Almaden,6005076801AF813F1000000000000020

This command shows the host or hosts to which the volume_B volume is mapped. It is normal to see duplicate entries when multiple paths exist between the clustered system and the host. To be sure that the operating system on the host sees the disk only one time, you must install and configure a multipath software application, such as the IBM Subsystem Device Driver (SDD).

Specifying the -delim flag: Although the optional -delim flag normally comes at the end of the command string, in this case you must specify it before the volume name. Otherwise, the command does not return any data.

9.5.22 Showing the volume to which the host is mapped


To show the volume to which a specific host has been assigned, run the lshostvdiskmap command, as shown in Example 9-76.
Example 9-76 lshostvdiskmap command example
IBM_2145:ITSO_SVC1:admin>lshostvdiskmap -delim , Almaden
id,name,SCSI_id,vdisk_id,vdisk_name,vdisk_UID
2,Almaden,0,26,volume_B,60050768018301BF2800000000000005
2,Almaden,1,27,volume_A,60050768018301BF2800000000000004

This command shows which volumes are mapped to the host called Almaden.

Specifying the -delim flag: Although the optional -delim flag normally comes at the end of the command string, in this case you must specify it before the host name. Otherwise, the command does not return any data.

9.5.23 Tracing a volume from a host back to its physical disk


In many cases you must verify exactly which physical disk is presented to the host, for example, to determine from which storage pool a specific volume comes. From the host side, it is not possible for the server administrator to see on which physical disks the volumes reside. Instead, you start from your multipath command prompt and trace the volume back through the SVC, as shown in the following steps:

1. On your host, run the datapath query device command. You see a long disk serial number for each vpath device, as shown in Example 9-77.
Example 9-77 datapath query device

DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 60050768018301BF2800000000000005
============================================================================
Path#              Adapter/Hard Disk   State     Mode      Select  Errors
0          Scsi Port2 Bus0/Disk1 Part0 OPEN      NORMAL    20      0
1          Scsi Port3 Bus0/Disk1 Part0 OPEN      NORMAL    2343    0

DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 60050768018301BF2800000000000004
============================================================================
Path#              Adapter/Hard Disk   State     Mode      Select  Errors
0          Scsi Port2 Bus0/Disk2 Part0 OPEN      NORMAL    2335    0
1          Scsi Port3 Bus0/Disk2 Part0 OPEN      NORMAL    0       0

DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 60050768018301BF2800000000000006
============================================================================
Path#              Adapter/Hard Disk   State     Mode      Select  Errors
0          Scsi Port2 Bus0/Disk3 Part0 OPEN      NORMAL    2331    0
1          Scsi Port3 Bus0/Disk3 Part0 OPEN      NORMAL    0       0

State: In Example 9-77 the state of each path is OPEN. Sometimes you will see the state CLOSED. This does not necessarily indicate a problem, because it might be a result of the path's processing stage.

2. Run the lshostvdiskmap command to return a list of all assigned volumes (Example 9-78).
Example 9-78 lshostvdiskmap
IBM_2145:ITSO_SVC1:admin>lshostvdiskmap -delim , Almaden
id,name,SCSI_id,vdisk_id,vdisk_name,vdisk_UID
2,Almaden,0,26,volume_B,60050768018301BF2800000000000005
2,Almaden,1,27,volume_A,60050768018301BF2800000000000004
2,Almaden,2,28,volume_C,60050768018301BF2800000000000006

Look for the disk serial number that matches your datapath query device output. This host was defined in our SVC as Almaden.

3. Run the lsvdiskmember vdiskname command for a list of the MDisk or MDisks that make up the specified volume (Example 9-79).
Example 9-79 lsvdiskmember
IBM_2145:ITSO_SVC1:admin>lsvdiskmember volume_E
id
0
1
2
3
4
10
11
13
15
16
17

4. Query the MDisks with the lsmdisk mdiskID command to find their controller and LUN number information, as shown in Example 9-80. The output displays the controller name and the controller LUN ID, which (provided that you gave your controller a unique name, such as a serial number) help you track back to a LUN within the disk subsystem.
Example 9-80 lsmdisk command
IBM_2145:ITSO_SVC1:admin>lsmdisk 0
id 0
name mdisk0
status online
mode managed
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
capacity 128.0GB
quorum_index 1
block_size 512
controller_name ITSO-DS3500
ctrl_type 4
ctrl_WWNN 20080080E51B09E8
controller_id 2
path_count 4
max_path_count 4
ctrl_LUN_# 0000000000000000
UID 60080e50001b0b62000007b04e731e4d00000000000000000000000000000000
preferred_WWPN 20580080E51B09E8
active_WWPN 20580080E51B09E8
fast_write_state empty
raid_status
raid_level
redundancy
strip_size
spare_goal
spare_protection_min
balanced
tier generic_hdd

9.6 Scripting under the CLI for SVC task automation


Command prefix changes: The svctask and svcinfo command prefixes are no longer necessary when issuing a command. If you have existing scripts that use those prefixes, they continue to function; you do not need to change them.

Scripting constructs work well for the automation of regular operational jobs, and you can use any available shell to develop scripts. Scripting enhances the productivity of SVC administrators and the integration of their storage virtualization environment. You can create your own customized scripts to automate a large number of tasks for completion at a variety of times and run them through the CLI.

In large SAN environments where scripted commands are used, we suggest keeping the scripts as simple as possible, because fallback, documentation, and verification of a successful script before execution are harder to manage at scale. In this section we present an overview of how to automate various tasks by creating scripts that use the IBM System Storage SAN Volume Controller (SVC) command-line interface (CLI).

9.6.1 Scripting structure


When creating scripts to automate tasks on the SVC, use the structure that is illustrated in Figure 9-2.

Figure 9-2 Scripting structure for SVC task automation: scheduled or manual activation triggers the script, which creates an SSH connection to the SVC, runs the commands, and performs logging

Creating a Secure Shell connection to the SVC


Note: Starting with SVC 6.3, using an SSH key is optional; you can access the system with a user ID and password. For security reasons, we suggest using an SSH key, and the following examples show how to use one.

When creating a connection to the SVC, the user who runs the script must have access to a private key that corresponds to a public key that was previously uploaded to the SVC. The key is used to establish the Secure Shell (SSH) connection that is needed to use the CLI on the SVC. If the SSH key pair is generated without a passphrase, you can connect without the need for special scripting to pass in the passphrase.

On UNIX systems, you can use the ssh command to create an SSH connection with the SVC. On Windows systems, you can use a utility called plink.exe, which is provided with the PuTTY tool, to create an SSH connection with the SVC. In the following examples, we use plink to create the SSH connection to the SVC.

Executing the commands


When using the CLI, refer to the IBM System Storage SAN Volume Controller Command-Line Interface User's Guide to obtain the correct syntax and a detailed explanation of each command. You can download it from the SVC documentation page for each SVC code level at this website:
http://www-947.ibm.com/support/entry/portal/Documentation/Hardware/System_Storage/Storage_software/Storage_virtualization/SAN_Volume_Controller_%282145%29

Performing logging

When using the CLI, not all commands provide a response from which you can determine the status of the invoked command. Therefore, always create checks that can be logged for monitoring and troubleshooting purposes.

Connecting to the SVC using a predefined SSH connection


The easiest way to create an SSH connection to the SVC is for plink to call a predefined PuTTY session. Define a session that includes this information:

The auto-login user name, set to your SVC admin user name (for example, admin). This parameter is set under the Connection > Data category, as shown in Figure 9-3 on page 513.


Figure 9-3 Auto-login configuration

The private key for authentication (for example, icat.ppk). This is the private key that you already created. This parameter is set under the Connection > SSH > Auth category, as shown in Figure 9-4.

Figure 9-4 SSH private key configuration

The IP address of the SVC clustered system. This parameter is set under the Session category as shown in Figure 9-5 on page 514.


Figure 9-5 IP address

A session name. Our example uses ITSO_SVC1. Our PuTTY version is 0.60.

To use this predefined PuTTY session, use the following syntax:
plink ITSO_SVC1

If a predefined PuTTY session is not used, use this syntax:
plink admin@<your cluster IP address> -i "C:\DirectoryPath\KeyName.PPK"

IBM provides a suite of scripting tools that is based on Perl. You can download these scripting tools from this website:
http://www.alphaworks.ibm.com/tech/svctools
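Putting the pieces of Figure 9-2 together, the following is a minimal Windows batch sketch, not a definitive implementation. The session name ITSO_SVC1 comes from the example above; the log file path and the choice of CLI command are hypothetical and for illustration only (the status filter is taken from the lsvdisk filter list shown later in this chapter). It assumes plink.exe is in the PATH and that the log directory already exists:

@echo off
rem Connect through the predefined PuTTY session, run one CLI command,
rem and append the output to a log file with timestamps for later review.
echo %date% %time% - checking for offline volumes >> C:\svc_logs\daily_check.log
plink ITSO_SVC1 "lsvdisk -filtervalue status=offline -delim ," >> C:\svc_logs\daily_check.log 2>&1
echo %date% %time% - check complete >> C:\svc_logs\daily_check.log

A script such as this one can then be run manually or through a scheduler, which corresponds to the activation step in Figure 9-2.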


9.7 SVC advanced operations using the CLI


In the following sections we describe the commands that we think best represent advanced operational commands.

Important command prefix changes: The svctask and svcinfo command prefixes are no longer necessary when issuing a command. If you have existing scripts that use those prefixes, they will continue to function. You do not need to change the scripts.

9.7.1 Command syntax


Two major command sets are available:
- The svcinfo command set allows you to query the various components within the SVC environment.
- The svctask command set allows you to make changes to the various components within the SVC.

When the command syntax is shown, you see several parameters in square brackets, for example, [parameter], which indicates that the parameter is optional in most if not all instances. Any parameter that is not in square brackets is required information. You can view the syntax of a command by entering one of the following commands:
svcinfo -?                          Shows a complete list of information commands.
svctask -?                          Shows a complete list of task commands.
svcinfo commandname -?              Shows the syntax of information commands.
svctask commandname -?              Shows the syntax of task commands.
svcinfo commandname -filtervalue?   Shows which filters you can use to reduce the output of the information commands.

Help: You can also use -h instead of -?, for example, svcinfo -h or svctask commandname -h.

If you look at the syntax of the command by typing svcinfo command name -?, you often see -filter listed as a parameter. Be aware that the correct parameter is -filtervalue.

Tip: You can use the up and down arrow keys on your keyboard to recall commands that were recently issued. Then, you can use the left and right, Backspace, and Delete keys to edit commands before you resubmit them.

9.7.2 Organizing on-window content


Sometimes the output of a command can be long and difficult to read in the window. In cases where you need information about a subset of the total number of available items, you can use filtering to reduce the output to a more manageable size.

Filtering
To reduce the output that is displayed by a command, you can specify a number of filters, depending on which command you are running. To see which filters are available, type the command followed by the -filtervalue? flag, as shown in Example 9-81.
Example 9-81 lsvdisk -filtervalue? command
IBM_2145:ITSO_SVC1:admin>lsvdisk -filtervalue?
Filters for this view are:
name
id
IO_group_id
IO_group_name
status
mdisk_grp_name
mdisk_grp_id
capacity
type
FC_id
FC_name
RC_id
RC_name
vdisk_name
vdisk_id
vdisk_UID
fc_map_count
copy_count
fast_write_state
se_copy_count
filesystem
preferred_node_id
mirror_write_priority
RC_flash

When you know the filters, you can be more selective in generating output:
- Multiple filters can be combined to create specific searches.
- You can use an asterisk (*) as a wildcard when using names.
- When capacity is used, the units must also be specified using -u b | kb | mb | gb | tb | pb.

For example, if we issue the lsvdisk command with no filters but with the -delim parameter, we see the output that is shown in Example 9-82.
Example 9-82 lsvdisk command: No filters
IBM_2145:ITSO_SVC1:admin>lsvdisk -delim ,
id,name,IO_group_id,IO_group_name,status,mdisk_grp_id,mdisk_grp_name,capacity,type,FC_id,FC_name,RC_id,RC_name,vdisk_UID,fc_map_count,copy_count,fast_write_state,se_copy_count,RC_change
0,ESXI_SRV1_VOL01,1,io_grp1,online,many,many,100.00GB,many,,,,,6005076801AF813F1000000000000014,0,2,empty,0,no
1,volume_7,0,io_grp0,online,0,STGPool_DS3500-1,10.00GB,striped,,,,,6005076801AF813F100000000000001F,0,1,empty,1,no
2,W2K3_SRV1_VOL02,1,io_grp1,online,0,STGPool_DS3500-1,10.00GB,striped,,,,,6005076801AF813F1000000000000003,0,1,empty,0,no
3,W2K3_SRV1_VOL03,1,io_grp1,online,0,STGPool_DS3500-1,10.00GB,striped,,,,,6005076801AF813F1000000000000004,0,1,empty,0,no
4,W2K3_SRV1_VOL04,1,io_grp1,online,0,STGPool_DS3500-1,10.00GB,striped,,,,,6005076801AF813F1000000000000005,0,1,empty,0,no
5,W2K3_SRV1_VOL05,1,io_grp1,online,0,STGPool_DS3500-1,10.00GB,striped,,,,,6005076801AF813F1000000000000006,0,1,empty,0,no
6,W2K3_SRV1_VOL06,1,io_grp1,online,0,STGPool_DS3500-1,10.00GB,striped,,,,,6005076801AF813F1000000000000007,0,1,empty,0,no
7,W2K3_SRV2_VOL01,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F1000000000000008,0,1,empty,0,no
8,W2K3_SRV2_VOL02,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F1000000000000009,0,1,empty,0,no

Tip: The -delim parameter prevents the content in the window from wrapping over multiple lines and separates the data fields with the specified delimiter. This parameter is normally used in cases where you need to generate reports during script execution.

If we now add a filter (mdisk_grp_name) to our lsvdisk command, we can reduce the output, as shown in Example 9-83.
Example 9-83 lsvdisk command: With a filter
IBM_2145:ITSO_SVC1:admin>lsvdisk -filtervalue mdisk_grp_name=STGPool_DS3500-2
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state se_copy_count RC_change
7,W2K3_SRV2_VOL01,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F1000000000000008,0,1,empty,0,no
8,W2K3_SRV2_VOL02,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F1000000000000009,0,1,empty,0,no
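Because names accept the asterisk wildcard mentioned above, you can also filter on a naming pattern. A minimal sketch, assuming the volume names shown in Example 9-82:

IBM_2145:ITSO_SVC1:admin>lsvdisk -filtervalue name=W2K3_SRV2*

This returns only the volumes whose names begin with W2K3_SRV2.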

9.8 Managing the clustered system using the CLI


In these sections we demonstrate how to perform system administration.

9.8.1 Viewing clustered system properties


Note: As of SVC 6.3, the svcinfo lscluster command is changed to lssystem, and the svctask chcluster command is changed to chsystem. Some optional parameters have moved to new commands; for example, to change the IP address of the system, you can now use the chsystemip command. All of the old commands are maintained for compatibility reasons.

Use the lssystem command to display summary information about the clustered system, as shown in Example 9-84.
Example 9-84 lssystem command
IBM_2145:ITSO_SVC1:admin>lssystem
id 000002006BE04FC4
name ITSO_SVC1
location local
partnership
bandwidth
total_mdisk_capacity 836.5GB
space_in_mdisk_grps 786.5GB
space_allocated_to_vdisks 434.02GB
total_free_space 402.5GB
total_vdiskcopy_capacity 442.00GB
total_used_capacity 432.00GB
total_overallocation 52
total_vdisk_capacity 341.00GB
total_allocated_extent_capacity 435.75GB
statistics_status on
statistics_frequency 15
cluster_locale en_US
time_zone 520 US/Pacific
code_level 6.3.0.0 (build 54.0.1109090000)
console_IP 10.18.228.81:443
id_alias 000002006BE04FC4
gm_link_tolerance 300
gm_inter_cluster_delay_simulation 0
gm_intra_cluster_delay_simulation 0
gm_max_host_delay 5
email_reply
email_contact
email_contact_primary
email_contact_alternate
email_contact_location
email_contact2
email_contact2_primary
email_contact2_alternate
email_state stopped
inventory_mail_interval 0
cluster_ntp_IP_address 69.50.219.51
cluster_isns_IP_address
iscsi_auth_method none
iscsi_chap_secret
auth_service_configured no
auth_service_enabled no
auth_service_url
auth_service_user_name
auth_service_pwd_set no
auth_service_cert_set no
auth_service_type tip
relationship_bandwidth_limit 25
tier generic_ssd
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier generic_hdd
tier_capacity 786.50GB
tier_free_capacity 352.25GB
has_nas_key no
layer appliance

Use the lssystemstats command to display the most recent values of all node statistics across all nodes in a clustered system, as shown in Example 9-85.
Example 9-85 lssystemstats command
IBM_2145:ITSO_SVC1:admin>lssystemstats
stat_name      stat_current stat_peak stat_peak_time
cpu_pc         1            1         110927162859
fc_mb          0            0         110927162859
fc_io          7091         7314      110927162524
sas_mb         0            0         110927162859
sas_io         0            0         110927162859
iscsi_mb       0            0         110927162859
iscsi_io       0            0         110927162859
write_cache_pc 0            0         110927162859
total_cache_pc 0            0         110927162859
vdisk_mb       0            0         110927162859
vdisk_io       0            0         110927162859
vdisk_ms       0            0         110927162859
mdisk_mb       0            0         110927162859
mdisk_io       0            0         110927162859
mdisk_ms       0            0         110927162859
drive_mb       0            0         110927162859
drive_io       0            0         110927162859
drive_ms       0            0         110927162859
vdisk_r_mb     0            0         110927162859
vdisk_r_io     0            0         110927162859
vdisk_r_ms     0            0         110927162859
vdisk_w_mb     0            0         110927162859
vdisk_w_io     0            0         110927162859
vdisk_w_ms     0            0         110927162859
mdisk_r_mb     0            0         110927162859
mdisk_r_io     0            0         110927162859
mdisk_r_ms     0            0         110927162859
mdisk_w_mb     0            0         110927162859
mdisk_w_io     0            0         110927162859
mdisk_w_ms     0            0         110927162859
drive_r_mb     0            0         110927162859
drive_r_io     0            0         110927162859
drive_r_ms     0            0         110927162859
drive_w_mb     0            0         110927162859
drive_w_io     0            0         110927162859
drive_w_ms     0            0         110927162859

9.8.2 Changing system settings


Use the chsystem command to change the settings of the system. This command modifies specific features of a clustered system. You can change multiple features by issuing a single command. All command parameters are optional; however, you must specify at least one.

Important: Be aware of the following points:
- Starting with SVC 6.3, the svctask chcluster command is changed to chsystem, and some optional parameters have moved to new commands; for example, to change the IP address of the system, you can now use the chsystemip command. All of the old commands are maintained for script compatibility reasons.
- Changing the speed on a running system breaks I/O service to the attached hosts. Before changing the fabric speed, stop I/O from the active hosts and force these hosts to flush any cached data by unmounting volumes (for UNIX host types) or by removing drive letters (for Windows host types). Some hosts might need to be rebooted to detect the new fabric speed.

Example 9-86 shows configuring the NTP IP address.
Example 9-86 chsystem command
IBM_2145:ITSO_SVC1:admin>chsystem -ntpip 10.200.80.1
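Because multiple features can be changed in a single invocation, parameters can be combined. The following is a hedged sketch with hypothetical values; the -gmlinktolerance parameter shown here corresponds to the gm_link_tolerance field in the lssystem output earlier, but check the CLI guide for the exact parameter set at your code level:

IBM_2145:ITSO_SVC1:admin>chsystem -ntpip 10.200.80.1 -gmlinktolerance 300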

9.8.3 iSCSI configuration


Starting with SVC 5.1, iSCSI was introduced as a supported method of communication between the SVC and hosts. All back-end storage and intracluster communication still use FC and the SAN, so iSCSI cannot be used for that communication. In 2.6, iSCSI overview on page 30 we describe in detail how iSCSI works; in this section we show how we configured our system for use with iSCSI.

We configured our nodes to use the primary and secondary Ethernet ports for iSCSI and to contain the clustered system IP. Configuring the nodes for iSCSI did not affect our clustered system IP; the clustered system IP is changed as shown in 9.8.2, Changing system settings on page 520.

It is important to know that the relationship between IP addresses and physical connections does not have to be one-to-one. We have the capability to have a four-to-one relationship (4:1), consisting of two IPv4 addresses plus two IPv6 addresses (four total) to one physical connection per port per node.

Tip: When reconfiguring IP ports, be aware that iSCSI connections that are already configured need to reconnect if changes are made to the IP addresses of the nodes.

There are two ways to configure iSCSI (CHAP) authentication: for the whole clustered system or per host connection. Example 9-87 shows configuring CHAP for the whole clustered system.
Example 9-87 Setting a CHAP secret for the entire clustered system to passw0rd
IBM_2145:ITSO_SVC1:admin>chsystem -iscsiauthmethod chap -chapsecret passw0rd
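To set a CHAP secret for a single host connection instead, the chhost command accepts a -chapsecret parameter. The following is a minimal sketch; the host name Almaden follows the earlier examples and the secret value is hypothetical, so consult the CLI guide for the exact syntax at your code level:

IBM_2145:ITSO_SVC1:admin>chhost -chapsecret passw0rd Almaden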

In our scenario we have a clustered system IP of 9.64.210.64, which is not affected while we configure the nodes' IP addresses. We start by listing our ports using the lsportip command (output not shown). We see that we have two ports per node with which to work, and both ports can have two IP addresses that can be used for iSCSI. We configure the secondary port in both nodes in our I/O Group, as shown in Example 9-88.
Example 9-88 Configuring the secondary Ethernet port on SVC nodes
IBM_2145:ITSO_SVC1:admin>cfgportip -node 1 -ip 9.8.7.1 -gw 9.0.0.1 -mask 255.255.255.0 2
IBM_2145:ITSO_SVC1:admin>cfgportip -node 2 -ip 9.8.7.3 -gw 9.0.0.1 -mask 255.255.255.0 2

While both nodes are online, each node is available to iSCSI hosts on the IP address that we configured. iSCSI failover between nodes is enabled automatically, so if a node goes offline for any reason, its partner node in the I/O Group becomes available on the failed node's port IP address. This behavior ensures that hosts can continue to perform I/O. The lsportip command displays which port IP addresses are currently active on each node.

9.8.4 Modifying IP addresses


We can use both IP ports of the nodes. However, the first time that you configure a second port, all IP information is required, because port 1 on the system must always have one stack fully configured; there are then two active system ports on the configuration node.

If the clustered system IP address is changed, the open command-line shell closes during the processing of the command, and you must reconnect to the new IP address if you were connected through that port.

If a node cannot rejoin the clustered system, you can bring the node up in service mode. In this mode, the node can be accessed as a stand-alone node using the service IP address. We discuss the service IP address in more detail in 9.19, Working with the Service Assistant menu on page 626.

List the IP addresses of the clustered system by issuing the lssystemip command, as shown in Example 9-89.
Example 9-89 lssystemip command
IBM_2145:ITSO_SVC1:admin>lssystemip
cluster_id cluster_name location port_id IP_address subnet_mask gateway IP_address_6 prefix_6 gateway_6
000002006BE04FC4 ITSO_SVC1 local 1 10.18.228.81 255.255.255.0 10.18.228.1 fd09:5030:beef:cafe:0000:0000:0000:0083 64 fd09:5030:beef:cafe:0000:0000:0000:0001
000002006BE04FC4 ITSO_SVC1 local 2
000002006AC03A42 ITSO_SVC2 remote 1 10.18.228.82 255.255.255.0 10.18.228.1
000002006AC03A42 ITSO_SVC2 remote 2
0000020060A06FB8 ITSO_SVC3 remote 1 10.18.228.83 255.255.255.0 10.18.228.83 fdee:beeb:beeb:0000:0000:0000:0000:0083 48 fdee:beeb:beeb:0000:0000:0000:0000:0083
0000020060A06FB8 ITSO_SVC3 remote 2

Modify the IP address by issuing the chsystemip command. You can either specify a static IP address or have the system assign a dynamic IP address, as shown in Example 9-90.
Example 9-90 chsystemip -systemip

IBM_2145:ITSO_SVC1:admin>chsystemip -systemip 10.20.133.5 -gw 10.20.135.1 -mask 255.255.255.0 -port 1

This command changes the current IP address of the clustered system to 10.20.133.5.

Important: If you specify a new system IP address, the existing communication with the system through the CLI is broken and the PuTTY application automatically closes. You must relaunch the PuTTY application and point it to the new IP address, but your SSH key will still work.

You can list the service IP addresses of the clustered system by issuing the lsserviceip command.

9.8.5 Supported IP address formats


Table 9-1 lists the IP address formats.
Table 9-1 ip_address_list formats

IP type                                             ip_address_list format
IPv4 (no port set, SVC uses default)                1.2.3.4
IPv4 with specific port                             1.2.3.4:22
Full IPv6, default port                             1234:1234:0001:0123:1234:1234:1234:1234
Full IPv6, default port, leading zeros suppressed   1234:1234:1:123:1234:1234:1234:1234
Full IPv6 with port                                 [2002:914:fc12:848:209:6bff:fe8c:4ff6]:23
Zero-compressed IPv6, default port                  2002::4ff6
Zero-compressed IPv6 with port                      [2002::4ff6]:23

At this point, we have completed the tasks that are required to change the IP addresses of the clustered system.

9.8.6 Setting the clustered system time zone and time


Use the -timezone parameter to specify the numeric ID of the time zone that you want to set. Issue the lstimezones command to list the time zones that are available on the system; this command displays a list of valid time zone settings.

Tip: If you have changed the time zone, you must clear the event log dump directory before you can view the event log through the web application.

Setting the clustered system time zone


Perform the following steps to set the clustered system time zone and time:

1. Find out for which time zone your system is currently configured by entering the showtimezone command, as shown in Example 9-91.
Example 9-91 showtimezone command
IBM_2145:ITSO_SVC1:admin>showtimezone
id timezone
522 UTC

2. To find the time zone code that is associated with your time zone, enter the lstimezones command, as shown in Example 9-92. A truncated list is provided for this example. If this setting is correct (for example, 522 UTC), go to Step 4. If not, continue with Step 3.
Example 9-92 lstimezones command
IBM_2145:ITSO_SVC1:admin>lstimezones
id timezone
.
.
507 Turkey
508 UCT
509 Universal
510 US/Alaska
511 US/Aleutian
512 US/Arizona
513 US/Central
514 US/Eastern
515 US/East-Indiana
516 US/Hawaii
517 US/Indiana-Starke
518 US/Michigan
519 US/Mountain
520 US/Pacific
521 US/Samoa
522 UTC
.
.

3. Now that you know which time zone code is correct for you, set the time zone by issuing the settimezone command (Example 9-93).
Example 9-93 settimezone command
IBM_2145:ITSO_SVC1:admin>settimezone -timezone 520

4. Set the system time by issuing the setclustertime command (Example 9-94).
Example 9-94 setclustertime command
IBM_2145:ITSO_SVC1:admin>setclustertime -time 061718402008

The format of the time is MMDDHHmmYYYY. You have completed the necessary tasks to set the clustered system time zone and time.
9.8.7 Starting statistics collection


Statistics are collected at the end of each sampling period (as specified by the -interval parameter). These statistics are written to a file. A new file is created at the end of each sampling period. Separate files are created for MDisks, volumes, and node statistics. Use the startstats command to start the collection of statistics, as shown in Example 9-95.
Example 9-95 startstats command
IBM_2145:ITSO_SVC1:admin>startstats -interval 15

The interval that we specify (minimum 1, maximum 60) is in minutes. This command starts statistics collection and gathers data at 15-minute intervals.

Statistics collection: To verify that statistics collection is set, display the system properties again, as shown in Example 9-96.
Example 9-96 Statistics collection status and frequency

IBM_2145:ITSO_SVC1:admin>lssystem
statistics_status on
statistics_frequency 15

Note that the output has been shortened for easier reading.

Note: Starting with SVC 6.3, the svctask stopstats command has been removed; you cannot disable statistics collection.

At this point, we have completed the required tasks to start statistics collection on the clustered system.
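Because a new statistics file is created at the end of each sampling period, the files accumulate in the dumps area of the configuration node. As a hedged sketch (the /dumps/iostats prefix is the conventional location for these files; verify it on your code level), you can list them with the lsdumps command:

IBM_2145:ITSO_SVC1:admin>lsdumps -prefix /dumps/iostats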

9.8.8 Determining the status of a copy operation


Use the lscopystatus command, as shown in Example 9-97, to determine if a file copy operation is in progress. Only one file copy operation can be performed at a time. The output of this command is a status of active or inactive.
Example 9-97 lscopystatus command
IBM_2145:ITSO_SVC1:admin>lscopystatus
status inactive

9.8.9 Shutting down a clustered system


If all input power to an SVC system is to be removed for more than a few minutes (for example, if the machine room power is to be shut down for maintenance), it is important to shut down the clustered system before removing the power. If the input power is removed from the uninterruptible power supply units without first shutting down the system and the uninterruptible power supply units, the uninterruptible power supply units remain operational and eventually become drained of power.

When input power is restored to the uninterruptible power supply units, they start to recharge. However, the SVC does not permit any I/O activity to be performed to the volumes until the uninterruptible power supply units are charged enough to enable all of the data on the SVC nodes to be destaged in the event of a subsequent unexpected power loss. Recharging the uninterruptible power supply can take as long as two hours.

Shutting down the clustered system prior to removing input power to the uninterruptible power supply units prevents the battery power from being drained. It also makes it possible for I/O activity to be resumed as soon as input power is restored. You can use the following procedure to shut down the system:

1. Use the stopsystem command to shut down your SVC system (Example 9-98).
Example 9-98 stopsystem command
IBM_2145:ITSO_SVC1:admin>stopsystem
Are you sure that you want to continue with the shut down?

This command shuts down the SVC clustered system. All data is flushed to disk before the power is removed. When the command runs, you lose administrative contact with your system, and the PuTTY application automatically closes.

2. You are presented with the following prompt:
Are you sure that you want to continue with the shut down?
Ensure that you have stopped all FlashCopy mappings, Metro Mirror (Remote Copy) relationships, data migration operations, and forced deletions before continuing. Entering y executes the command; entering anything other than y(es) or Y(ES) results in the command not executing. In either case, no further feedback is displayed.

Important: Before shutting down a clustered system, ensure that all I/O operations that are destined for this system are stopped, because you will lose access to all volumes being provided by this system. Failure to do so can result in failed I/O operations being reported to the host operating systems. Begin the process of quiescing all I/O to the system by stopping the applications on the hosts that are using the volumes provided by the clustered system.

3. We have completed the tasks that are required to shut down the system. To shut down the uninterruptible power supply units, press the power button on the front panel of each uninterruptible power supply unit.

Restarting the system: To restart the clustered system, you must first restart the uninterruptible power supply units by pressing the power button on their front panels. Then press the power-on button on the service panel of one of the nodes within the system. After the node is fully booted (for example, displaying Cluster: on line 1 and the cluster name on line 2 of the panel), you can start the other nodes in the same way. As soon as all of the nodes are fully booted, you can reestablish administrative contact using PuTTY, and your system will be fully operational again.

9.9 Nodes
This section details the tasks that can be performed at an individual node level.

9.9.1 Viewing node details


Use the lsnode command to view summary information about the nodes that are defined within the SVC environment. To view more details about a specific node, append the node name (for example, SVC1N1) to the command. Example 9-99 shows both of these commands.

Tip: The -delim parameter prevents the content in the window from wrapping over multiple lines and separates the data fields with the specified delimiter.
Example 9-99 lsnode command
IBM_2145:ITSO_SVC1:admin>lsnode -delim ,
id,name,UPS_serial_number,WWNN,status,IO_group_id,IO_group_name,config_node,UPS_unique_id,hardware,iscsi_name,iscsi_alias,panel_name,enclosure_id,canister_id,enclosure_serial_number
1,SVC1N1,1000739004,50050768010027E2,online,0,io_grp0,no,10000000000027E2,8G4,iqn.1986-03.com.ibm:2145.itsosvc1.svc1n1,,108283,,,
2,SVC1N2,1000739005,5005076801005034,online,0,io_grp0,yes,1000000000005034,8G4,iqn.1986-03.com.ibm:2145.itsosvc1.svc1n2,,110711,,,
3,SVC1N4,1000739006,500507680100505C,online,1,io_grp1,no,20400001C3240004,8G4,iqn.1986-03.com.ibm:2145.itsosvc1.svc1n4,,110775,,,
4,SVC1N3,1000739007,50050768010037E5,online,1,io_grp1,no,10000000000037E5,8G4,iqn.1986-03.com.ibm:2145.itsosvc1.svc1n3,,104643,,,
IBM_2145:ITSO_SVC1:admin>lsnode SVC1N1
id 1
name SVC1N1
UPS_serial_number 1000739004
WWNN 50050768010027E2
status online
IO_group_id 0
IO_group_name io_grp0
partner_node_id 2
partner_node_name SVC1N2
config_node no
UPS_unique_id 10000000000027E2
port_id 50050768014027E2
port_status active
port_speed 2Gb
port_id 50050768013027E2
port_status active
port_speed 2Gb
port_id 50050768011027E2
port_status active
port_speed 2Gb
port_id 50050768012027E2
port_status active
port_speed 2Gb
hardware 8G4
iscsi_name iqn.1986-03.com.ibm:2145.itsosvc1.svc1n1
iscsi_alias
failover_active no
failover_name SVC1N2
failover_iscsi_name iqn.1986-03.com.ibm:2145.itsosvc1.svc1n2
failover_iscsi_alias
panel_name 108283
enclosure_id
canister_id
enclosure_serial_number
service_IP_address 10.18.228.101
service_gateway 10.18.228.1
service_subnet_mask 255.255.255.0
service_IP_address_6
service_gateway_6
service_prefix_6
9.9.2 Adding a node


After clustered system creation is completed through the service panel (the front panel of one of the SVC nodes) and the system web interface, only one node (the configuration node) is set up. To have a fully functional SVC system, you must add a second node to the configuration.

To add a node to a clustered system, first gather the necessary information:
- Before you can add a node, you must know which unconfigured nodes you have as candidates. Issue the lsnodecandidate command (Example 9-100).
- You must specify to which I/O Group you are adding the node. If you enter the lsnode command, you can easily identify the I/O Group ID of the group to which you are adding your node, as shown in Example 9-101.
Example 9-100 lsnodecandidate command
IBM_2145:ITSO_SVC1:admin>lsnodecandidate
id               panel_name UPS_serial_number UPS_unique_id    hardware
50050768010037E5 104643     1000739007        10000000000037E5 8G4

Tip: The node that you want to add must have a separate uninterruptible power supply unit serial number from the uninterruptible power supply unit on the first node.
Example 9-101 lsnode command
IBM_2145:ITSO_SVC1:admin>lsnode -delim ,
id,name,UPS_serial_number,WWNN,status,IO_group_id,IO_group_name,config_node,UPS_unique_id,hardware,iscsi_name,iscsi_alias,panel_name,enclosure_id,canister_id,enclosure_serial_number
4,SVC1N3,1000739007,50050768010037E5,online,1,io_grp1,no,10000000000037E5,8G4,iqn.1986-03.com.ibm:2145.itsosvc1.svc1n3,,104643,,,

Now that we know the available nodes, we can use the addnode command to add the node to the SVC clustered system configuration. Example 9-102 shows the command to add a node to the SVC system.
Example 9-102 addnode (wwnodename) command
IBM_2145:ITSO_SVC1:admin>addnode -wwnodename 50050768010037E5 -iogrp io_grp1
Node, id [5], successfully added

This command adds the candidate node with the WWNN 50050768010037E5 to the I/O Group called io_grp1. Here we used the -wwnodename parameter. However, we can also use the -panelname parameter (104643) instead, as shown in Example 9-103. If you are standing in front of the node, it is easier to read the panel name than to obtain the WWNN.
Example 9-103 addnode (panelname) command
IBM_2145:ITSO_SVC1:admin>addnode -panelname 104643 -name SVC1N3 -iogrp io_grp1

We also used the optional -name parameter (SVC1N3). If you do not provide the -name parameter, the SVC automatically generates the name nodeX (where X is the ID sequence number that is assigned internally by the SVC).

Name: If you want to provide a name, you can use letters A to Z and a to z, numbers 0 to 9, the dash (-), and the underscore (_). The name can be between one and 63 characters in length. However, the name cannot start with a number, a dash, or the word node, because this prefix is reserved for SVC assignment only.

If the lsnodecandidate command returns no information even though your second node is powered on and the zones are correctly defined, preexisting system configuration data might be stored in the node. If you are sure that this node is not part of another active SVC system, you can use the service panel to delete the existing system information. After this action is complete, reissue the lsnodecandidate command, and the node is listed.

9.9.3 Renaming a node


Use the chnode command to rename a node within the SVC system configuration as shown in Example 9-104.
Example 9-104 chnode -name command
IBM_2145:ITSO_SVC1:admin>chnode -name ITSO_SVC1_SVC1N3 4

This command renames node ID 4 to ITSO_SVC1_SVC1N3.

Name: The chnode command specifies the new name first. You can use letters A to Z and a to z, numbers 0 to 9, the dash (-), and the underscore (_). The name can be between one and 63 characters in length. However, the name cannot start with a number, a dash, or the word node, because this prefix is reserved for SVC assignment only.

9.9.4 Deleting a node


Use the rmnode command to remove a node from the SVC clustered system configuration (Example 9-105).
Example 9-105 rmnode command
IBM_2145:ITSO_SVC1:admin>rmnode SVC1N2

This command removes SVC1N2 from the SVC clustered system.

Because SVC1N2 was also the configuration node, the SVC transfers the configuration node responsibilities to a surviving node within the I/O Group. Unfortunately, the PuTTY session cannot be dynamically passed to the surviving node, so the PuTTY application loses communication and closes automatically. We must restart the PuTTY application to establish a secure session with the new configuration node.

Important: If this node is the last node in an I/O Group, and there are volumes still assigned to the I/O Group, the node is not deleted from the clustered system. If this node is the last node in the system, and the I/O Group has no volumes remaining, the clustered system is destroyed and all virtualization information is lost. Any data that is still required must be backed up or migrated prior to destroying the system.

9.9.5 Shutting down a node


On occasion, it can be necessary to shut down a single node within the clustered system to perform tasks, such as scheduled maintenance, while leaving the SVC environment up and running. Use the stopcluster -node command, as shown in Example 9-106, to shut down a single node.
Example 9-106 stopcluster -node command

IBM_2145:ITSO_SVC1:admin>stopcluster -node SVC1N3
Are you sure that you want to continue with the shut down?

This command shuts down node SVC1N3 in a graceful manner. When this node has been shut down, the other node in the I/O Group destages the contents of its cache and goes into write-through mode until the node is powered up and rejoins the clustered system.

Important: There is no need to stop FlashCopy mappings, Remote Copy relationships, and data migration operations; the other node handles these activities. Be aware, however, that the system now has a single point of failure. If this is the last node in an I/O Group, all access to the volumes in the I/O Group will be lost; verify that you really want to shut down this node before executing the command, and in this case you must specify the -force flag.
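As a hedged illustration of that last point only, shutting down the sole remaining node of an I/O Group would look like the following sketch (the node name is hypothetical); do not run this unless you accept losing access to the I/O Group's volumes:

IBM_2145:ITSO_SVC1:admin>stopcluster -force -node SVC1N4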
Example 9-107 lsnode command IBM_2145:ITSO_SVC1:admin>lsnode -delim , id,name,UPS_serial_number,WWNN,status,IO_group_id,IO_group_name,config_node,UPS_unique_id,h ardware,iscsi_name,iscsi_alias,panel_name,enclosure_id,canister_id,enclosure_serial_number 1,SVC1N1,1000739004,50050768010027E2,online,0,io_grp0,no,10000000000027E2,8G4,iqn.1986-03.c om.ibm:2145.itsosvc1.svc1n1,,108283,,, 2,SVC1N2,1000739005,5005076801005034,online,0,io_grp0,yes,1000000000005034,8G4,iqn.1986-03. com.ibm:2145.itsosvc1.svc1n2,,110711,,, 3,SVC1N4,1000739006,500507680100505C,online,1,io_grp1,no,20400001C3240004,8G4,iqn.1986-03.c om.ibm:2145.itsosvc1.svc1n4,,110775,,, 4,SVC1N3,1000739007,50050768010037E5,offline,1,io_grp1,no,10000000000037E5,8G4,iqn.1986-03. com.ibm:2145.itsosvc1.svc1n3,,104643,,,


IBM_2145:ITSO_SVC1:admin>lsnode SVC1N3
CMMVC5782E The object specified is offline.

Restart: To restart the node manually, press the power-on button on the service panel of the node.

At this point, we have completed the tasks that are required to view, add, delete, rename, and shut down a node within an SVC environment.


9.10 I/O Groups


This section explains the tasks that you can perform at an I/O Group level.

9.10.1 Viewing I/O Group details


Use the lsiogrp command, as shown in Example 9-108, to view information about the I/O Groups that are defined within the SVC environment.
Example 9-108 I/O Group details

IBM_2145:ITSO_SVC1:admin>lsiogrp
id name            node_count vdisk_count host_count
0  io_grp0         2          24          9
1  io_grp1         2          22          9
2  io_grp2         0          0           1
3  io_grp3         0          0           1
4  recovery_io_grp 0          0           0

As shown, the SVC predefines five I/O Groups. In a four-node clustered system (such as our example), only two I/O Groups are actually in use. The other I/O Groups (io_grp2 and io_grp3) are reserved for a six- or eight-node clustered system. The recovery I/O Group is a temporary home for volumes when all of the nodes in the I/O Group that normally owns them have suffered multiple failures. This design allows us to move the volumes to the recovery I/O Group and then into a working I/O Group. Note that while a volume is temporarily assigned to the recovery I/O Group, I/O access to it is not possible.

9.10.2 Renaming an I/O Group


Use the chiogrp command to rename an I/O Group (Example 9-109).
Example 9-109 chiogrp command

IBM_2145:ITSO_SVC1:admin>chiogrp -name io_grpA io_grp1

This command renames the I/O Group io_grp1 to io_grpA.

Name: The chiogrp command specifies the new name first. You can use the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the underscore (_). The name can be between one and 63 characters in length. However, the name cannot start with a number, a dash, or the word iogrp (because this prefix is reserved for SVC assignment only).

At this point, we have completed the tasks that are required to rename an I/O Group. To verify that the renaming was successful, issue the lsiogrp command again to see the change.
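The verification might look similar to the following sketch (the node and volume counts are simply carried over from Example 9-108 and are assumptions for illustration):

IBM_2145:ITSO_SVC1:admin>lsiogrp
id name            node_count vdisk_count host_count
0  io_grp0         2          24          9
1  io_grpA         2          22          9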

9.10.3 Adding and removing hostiogrp


Mapping host objects to specific I/O Groups allows you to reach the maximum number of hosts that an SVC clustered system supports. Use the addhostiogrp command to map a specific host to a specific I/O Group, as shown in Example 9-110 on page 532.


Example 9-110 addhostiogrp command

IBM_2145:ITSO_SVC1:admin>addhostiogrp -iogrp 1 Kanaga

The addhostiogrp command takes the following parameters:
-iogrp iogrp_list: Specifies a list of one or more I/O Groups that must be mapped to the host. This parameter is mutually exclusive with -iogrpall.
-iogrpall: Specifies that all of the I/O Groups must be mapped to the specified host. This parameter is mutually exclusive with -iogrp.
host_id_or_name: Identifies the host, by ID or name, to which the I/O Groups must be mapped.

Use the rmhostiogrp command to unmap a specific host from a specific I/O Group, as shown in Example 9-111.
Example 9-111 rmhostiogrp command

IBM_2145:ITSO_SVC1:admin>rmhostiogrp -iogrp 0 Kanaga

The rmhostiogrp command takes the following parameters:
-iogrp iogrp_list: Specifies a list of one or more I/O Groups that must be unmapped from the host. This parameter is mutually exclusive with -iogrpall.
-iogrpall: Specifies that all of the I/O Groups must be unmapped from the specified host. This parameter is mutually exclusive with -iogrp.
-force: If the removal of a host-to-I/O Group mapping results in the loss of volume-to-host mappings, the command fails unless the -force flag is used. The -force flag overrides this behavior and forces the deletion of the host-to-I/O Group mapping.
host_id_or_name: Identifies the host, by ID or name, from which the I/O Groups must be unmapped.
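If unmapping an I/O Group would remove existing volume-to-host mappings, the command must be forced. A minimal sketch, assuming the Kanaga host from our example and that the affected volume mappings are no longer needed:

IBM_2145:ITSO_SVC1:admin>rmhostiogrp -force -iogrp 1 Kanaga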

9.10.4 Listing I/O Groups


To list all of the I/O Groups that are mapped to a specified host, use the lshostiogrp command, specifying the host name Kanaga, as shown in Example 9-112.
Example 9-112 lshostiogrp command

IBM_2145:ITSO_SVC1:admin>lshostiogrp Kanaga
id name
1  io_grp1

To list all of the host objects that are mapped to a specified I/O Group, use the lsiogrphost command, as shown in Example 9-113 on page 533.


Example 9-113 lsiogrphost command

IBM_2145:ITSO_SVC1:admin>lsiogrphost io_grp1
id name
1  Nile
2  Kanaga
3  Siam

In Example 9-113, io_grp1 is the I/O Group name.


9.11 Managing authentication


In the following sections we illustrate authentication administration.

9.11.1 Managing users using the CLI


Here we demonstrate how to operate and manage authentication by using the CLI. All users must now be a member of a predefined user group. You can list those groups by using the lsusergrp command, as shown in Example 9-114.
Example 9-114 lsusergrp command

IBM_2145:ITSO_SVC1:admin>lsusergrp
id name          role          remote
0  SecurityAdmin SecurityAdmin no
1  Administrator Administrator no
2  CopyOperator  CopyOperator  no
3  Service       Service       no
4  Monitor       Monitor       no

Example 9-115 is a simple example of creating a user. User John is added to the user group Monitor with the password m0nitor.
Example 9-115 mkuser called John with password m0nitor

IBM_2145:ITSO_SVC1:admin>mkuser -name John -usergrp Monitor -password m0nitor
User, id [6], successfully created

Local users are users that are not authenticated by a remote authentication server. Remote users are users that are authenticated by a remote central registry server. The user groups already have a defined authority role, as listed in Table 9-2.

Table 9-2 Authority roles

User group: Security admin
Role: All commands
User: Superusers

User group: Administrator
Role: All commands except: chauthservice, mkuser, rmuser, chuser, mkusergrp, rmusergrp, chusergrp, and setpwdreset
User: Administrators that control the SVC

User group: Copy operator
Role: All display commands and the following commands: prestartfcconsistgrp, startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap, startfcmap, stopfcmap, chfcmap, startrcconsistgrp, stoprcconsistgrp, switchrcconsistgrp, chrcconsistgrp, startrcrelationship, stoprcrelationship, switchrcrelationship, chrcrelationship, and chpartnership
User: For users that control all of the copy functionality of the cluster

User group: Service
Role: All display commands and the following commands: applysoftware, setlocale, addnode, rmnode, cherrstate, writesernum, detectmdisk, includemdisk, clearerrlog, cleardumps, settimezone, stopcluster, startstats, stopstats, and settime
User: For users that perform service maintenance and other hardware tasks on the system

User group: Monitor
Role: All display commands and the following commands: finderr, dumperrlog, dumpinternallog, and chcurrentuser; also svcconfig: backup
User: For users only needing view access

9.11.2 Managing user roles and groups


Role-based security commands are used to restrict the administrative abilities of a user. We cannot create new user roles, but we can create new user groups and assign a predefined role to our group. Starting with SVC 6.3, you can connect to the clustered system CLI using the same user name with which you log in to the SAN Volume Controller GUI. To view the user groups and their roles on your system, use the lsusergrp command, as shown in Example 9-116.
Example 9-116 lsusergrp command

IBM_2145:ITSO_SVC1:admin>lsusergrp
id name          role          remote
0  SecurityAdmin SecurityAdmin no
1  Administrator Administrator no
2  CopyOperator  CopyOperator  no
3  Service       Service       no
4  Monitor       Monitor       no
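To create a new user group and assign one of the predefined roles to it, you can use the mkusergrp command. The following is a minimal sketch only; the group name ITSO_Admins is a hypothetical example:

IBM_2145:ITSO_SVC1:admin>mkusergrp -name ITSO_Admins -role Administrator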


To view the currently defined users and the user groups to which they belong, we use the lsuser command, as shown in Example 9-117.
Example 9-117 lsuser command

IBM_2145:ITSO_SVC1:admin>lsuser -delim ,
id,name,password,ssh_key,remote,usergrp_id,usergrp_name
0,superuser,yes,no,no,0,SecurityAdmin
1,admin,yes,yes,no,0,SecurityAdmin
2,Torben,yes,no,no,0,SecurityAdmin
3,Massimo,yes,no,no,1,Administrator
4,Christian,yes,no,no,1,Administrator
5,Alejandro,yes,no,no,1,Administrator
6,John,yes,no,no,4,Monitor

9.11.3 Changing a user


To change a user's password or other properties, issue the chuser command. The chuser command allows you to modify a user that has already been created: you can rename the user, assign a new password (if you are logged on with administrative privileges), and move the user from one user group to another. Be aware, however, that a user can only be a member of one group at a time.
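For example, to move the user John into the CopyOperator group and assign a new password at the same time, the command might look like the following sketch (the new password is a hypothetical value):

IBM_2145:ITSO_SVC1:admin>chuser -usergrp CopyOperator -password c0py0p John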

9.11.4 Audit log command


The audit log can be extremely helpful in showing which commands have been entered on a system. Most action commands that are issued by the old or new CLI are recorded in the audit log. The native GUI performs actions by using the CLI programs, and the SVC Console performs actions by issuing Common Information Model (CIM) commands to the CIM object manager (CIMOM), which then runs the CLI programs. Actions performed by using both the native GUI and the SVC Console are therefore recorded in the audit log.

Certain commands are not audited:
dumpconfig
cpdumps
cleardumps
finderr
dumperrlog
dumpinternallog
svcservicetask dumperrlog
svcservicetask finderr

The audit log contains approximately 1 MB of data, which can hold about 6000 average-length commands. When this log is full, the system copies it to a new file in the /dumps/audit directory on the config node and resets the in-memory audit log.

To display entries from the audit log, use the catauditlog -first 5 command to return a list of five in-memory audit log entries, as shown in Example 9-118 on page 537.


Example 9-118 catauditlog command

IBM_2145:ITSO_SVC1:admin>catauditlog -first 5
audit_seq_no timestamp cluster_user ssh_ip_address result res_obj_id action_cmd
459 110928150506 admin 10.18.228.173 0 6 svctask mkuser -name John -usergrp Monitor -password '######'
460 110928160353 admin 10.18.228.173 0 7 svctask mkmdiskgrp -name DS5000-2 -ext 256
461 110928160535 admin 10.18.228.173 0 1 svctask mkhost -name hostone -hbawwpn 210100E08B251DD4 -force -mask 1001
462 110928160755 admin 10.18.228.173 0 1 svctask mkvdisk -iogrp 0 -mdiskgrp 3 -size 10 -unit gb -vtype striped -autoexpand -grainsize 32 -rsize 20%
463 110928160817 admin 10.18.228.173 0   svctask rmvdisk 1

If you need to dump the contents of the in-memory audit log to a file on the current configuration node, use the dumpauditlog command. This command does not provide any feedback; it only provides the prompt. To obtain a list of the audit log dumps, use the lsdumps command as shown in Example 9-119.
Example 9-119 lsdumps command

IBM_2145:ITSO_SVC1:admin>lsdumps
id filename
0  dump.110711.110914.182844
1  svc.config.cron.bak_108283
2  sel.110711.trc
3  endd.trc
4  rtc.race_mq_log.txt.110711.trc
5  dump.110711.110920.102530
6  ethernet.110711.trc
7  svc.config.cron.bak_110711
8  svc.config.cron.xml_110711
9  svc.config.cron.log_110711
10 svc.config.cron.sh_110711
11 110711.trc


9.12 Managing Copy Services


In the following sections we illustrate how to manage Copy Services.

9.12.1 FlashCopy operations


In this section we use a scenario to illustrate how to use commands with PuTTY to perform FlashCopy. See IBM System Storage Open Software Family SAN Volume Controller: Command-Line Interface User's Guide, GC27-2287, for information about other commands.

Scenario description
We use the following scenario in both the command-line section and the GUI section. In the following scenario, we want to FlashCopy these volumes:
DB_Source: Database files
Log_Source: Database log files
App_Source: Application files

We create Consistency Groups to handle the FlashCopy of DB_Source and Log_Source, because data integrity must be kept across DB_Source and Log_Source. In our scenario, the application files are independent of the database, so we create a single FlashCopy mapping for App_Source. We will make two FlashCopy targets each for DB_Source and Log_Source and, therefore, two Consistency Groups. Figure 9-6 shows the scenario.

Figure 9-6 FlashCopy scenario


9.12.2 Setting up FlashCopy


We have already created the source and target volumes, and each source and target pair is identical in size, which is a requirement of the FlashCopy function:
DB_Source, DB_Target1, and DB_Target2
Log_Source, Log_Target1, and Log_Target2
App_Source and App_Target1

To set up the FlashCopy, we perform the following steps:
1. Create two FlashCopy Consistency Groups:
   FCCG1
   FCCG2
2. Create FlashCopy mappings for the source volumes, each with a copy rate of 50:
   DB_Source FlashCopy to DB_Target1; the mapping name is DB_Map1
   DB_Source FlashCopy to DB_Target2; the mapping name is DB_Map2
   Log_Source FlashCopy to Log_Target1; the mapping name is Log_Map1
   Log_Source FlashCopy to Log_Target2; the mapping name is Log_Map2
   App_Source FlashCopy to App_Target1; the mapping name is App_Map1

9.12.3 Creating a FlashCopy Consistency Group


To create a FlashCopy Consistency Group, we use the mkfcconsistgrp command. The ID of the new group is returned. If you have created several FlashCopy mappings for a group of volumes that contain elements of data for the same application, it might be convenient to assign these mappings to a single FlashCopy Consistency Group. Then you can issue a single prepare or start command for the whole group so that, for example, all files for a particular database are copied at the same time. In Example 9-120, the FCCG1 and FCCG2 Consistency Groups are created to hold the FlashCopy maps of DB and Log. This step is extremely important for FlashCopy on database applications, because it helps to maintain data integrity during FlashCopy.
Example 9-120 Creating two FlashCopy Consistency Groups

IBM_2145:ITSO_SVC3:admin>mkfcconsistgrp -name FCCG1
FlashCopy Consistency Group, id [1], successfully created
IBM_2145:ITSO_SVC3:admin>mkfcconsistgrp -name FCCG2
FlashCopy Consistency Group, id [2], successfully created

In Example 9-121, we check the status of the Consistency Groups. Each Consistency Group has a status of empty.
Example 9-121 Checking the status

IBM_2145:ITSO_SVC3:admin>lsfcconsistgrp
id name  status
1  FCCG1 empty
2  FCCG2 empty

If you want to change the name of a Consistency Group, you can use the chfcconsistgrp command. Type chfcconsistgrp -h for help with this command.
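For example, renaming FCCG1 might look like the following sketch (the new group name is a hypothetical example):

IBM_2145:ITSO_SVC3:admin>chfcconsistgrp -name FCCG1_DB FCCG1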


9.12.4 Creating a FlashCopy mapping


To create a FlashCopy mapping, we use the mkfcmap command. This command creates a new FlashCopy mapping, which maps a source volume to a target volume for subsequent copying. When executed, this command creates a new FlashCopy mapping logical object. This mapping persists until it is deleted.

The mapping specifies the source and destination volumes. The destination must be identical in size to the source, or the mapping fails. Issue the lsvdisk -bytes command to find the exact size of the source volume for which you want to create a target disk of the same size. In a single mapping, the source and destination cannot be the same volume.

A mapping is triggered at the point in time when the copy is required. The mapping can optionally be given a name and assigned to a Consistency Group. These groups of mappings can be triggered at the same time, enabling multiple volumes to be copied at the same time, which creates a consistent copy of multiple disks. A consistent copy of multiple disks is required for database products in which the database and log files reside on separate disks.

If no Consistency Group is defined, the mapping is assigned to the default group 0, which is a special group that cannot be started as a whole. Mappings in this group can only be started on an individual basis.

The background copy rate specifies the priority that is given to completing the copy. If 0 is specified, the copy does not proceed in the background. The default is 50.

Tip: There is a parameter to delete FlashCopy mappings automatically after completion of a background copy (when the mapping reaches the idle_or_copied state). Use the command: mkfcmap -autodelete. This option does not delete a mapping that is in a cascade with dependent mappings, because such a mapping cannot reach the idle_or_copied state in this situation.

In Example 9-122, the first FlashCopy mappings for DB_Source, Log_Source, and App_Source are created.
Example 9-122 Create the first FlashCopy mapping for DB_Source, Log_Source, and App_Source

IBM_2145:ITSO_SVC3:admin>mkfcmap -source DB_Source -target DB_Target1 -name DB_Map1 -consistgrp FCCG1
FlashCopy Mapping, id [0], successfully created
IBM_2145:ITSO_SVC3:admin>mkfcmap -source Log_Source -target Log_Target1 -name Log_Map1 -consistgrp FCCG1
FlashCopy Mapping, id [1], successfully created
IBM_2145:ITSO_SVC3:admin>mkfcmap -source App_Source -target App_Target1 -name App_Map1
FlashCopy Mapping, id [2], successfully created

Example 9-123 on page 541 shows the commands to create the second FlashCopy mappings for volumes DB_Source and Log_Source.


Example 9-123 Create additional FlashCopy mappings

IBM_2145:ITSO_SVC3:admin>mkfcmap -source DB_Source -target DB_Target2 -name DB_Map2 -consistgrp FCCG2
FlashCopy Mapping, id [3], successfully created
IBM_2145:ITSO_SVC3:admin>mkfcmap -source Log_Source -target Log_Target2 -name Log_Map2 -consistgrp FCCG2
FlashCopy Mapping, id [4], successfully created

Example 9-124 shows the result of these FlashCopy mappings. The status of each mapping is idle_or_copied.
Example 9-124 Check the result of Multiple Target FlashCopy mappings

IBM_2145:ITSO_SVC3:admin>lsfcmap
id name source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id group_name status progress copy_rate clean_progress incremental partner_FC_id partner_FC_name restoring start_time rc_controlled
0 DB_Map1 3 DB_Source 4 DB_Target1 1 FCCG1 idle_or_copied 0 50 100 off   no  no
1 Log_Map1 6 Log_Source 7 Log_Target1 1 FCCG1 idle_or_copied 0 50 100 off   no  no
2 App_Map1 9 App_Source 10 App_Target1   idle_or_copied 0 50 100 off   no  no
3 DB_Map2 3 DB_Source 5 DB_Target2 2 FCCG2 idle_or_copied 0 50 100 off   no  no
4 Log_Map2 6 Log_Source 8 Log_Target2 2 FCCG2 idle_or_copied 0 50 100 off   no  no
IBM_2145:ITSO_SVC3:admin>lsfcconsistgrp
id name  status
1  FCCG1 idle_or_copied
2  FCCG2 idle_or_copied

If you want to change the FlashCopy mapping, you can use the chfcmap command. Type chfcmap -h to get help with this command.
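For example, to raise the background copy rate of DB_Map1 from the default of 50, a command along these lines can be used (a sketch; the new rate of 80 is an arbitrary value for illustration):

IBM_2145:ITSO_SVC3:admin>chfcmap -copyrate 80 DB_Map1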

9.12.5 Preparing (pre-triggering) the FlashCopy mapping


At this point, the mapping has been created, but the cache still accepts data for the source volumes. You can only trigger the mapping when the cache does not contain any data for the FlashCopy source volumes. You must issue a prestartfcmap command to prepare a FlashCopy mapping to start. This command tells the SVC to flush the cache of any content for the source volume and to pass through any further write data for this volume.

When the prestartfcmap command is executed, the mapping enters the Preparing state. After the preparation is complete, it changes to the Prepared state. At this point, the mapping is ready for triggering. Preparing and the subsequent triggering are usually performed on a Consistency Group basis. Only mappings belonging to Consistency Group 0 can be prepared on their own, because Consistency Group 0 is a special group that contains the FlashCopy mappings that do not belong to any Consistency Group. A FlashCopy mapping must be prepared before it can be triggered.

In our scenario, App_Map1 is not in a Consistency Group. In Example 9-125, we show how to initialize the preparation for App_Map1. Another option is to add the -prep parameter to the startfcmap command, which first prepares the mapping and then starts the FlashCopy. In the example, we also show how to check the status of the current FlashCopy mapping: App_Map1's status is prepared.
Example 9-125 Prepare a FlashCopy without a Consistency Group

IBM_2145:ITSO_SVC3:admin>prestartfcmap App_Map1
IBM_2145:ITSO_SVC3:admin>lsfcmap App_Map1
id 2
name App_Map1
source_vdisk_id 9
source_vdisk_name App_Source
target_vdisk_id 10
target_vdisk_name App_Target1
group_id
group_name
status prepared
progress 0
copy_rate 50
start_time
dependent_mappings 0
autodelete off
clean_progress 0
clean_rate 50
incremental off
difference 0
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
rc_controlled no

9.12.6 Preparing (pre-triggering) the FlashCopy Consistency Group


We use the prestartfcconsistgrp command to prepare a FlashCopy Consistency Group. As with 9.12.5, Preparing (pre-triggering) the FlashCopy mapping on page 541, this command flushes the cache of any data that is destined for the source volumes and forces the cache into write-through mode until the mapping is started. The difference is that this command prepares a group of mappings (at a Consistency Group level) instead of one mapping. When you have assigned several mappings to a FlashCopy Consistency Group, you only have to issue a single prepare command for the whole group to prepare all of the mappings at one time.


Example 9-126 shows how we prepare the Consistency Groups for DB and Log and check the result. After the command executes, all of the FlashCopy mappings and both Consistency Groups are in the prepared status. Now we are ready to start the FlashCopy.
Example 9-126 Prepare a FlashCopy Consistency Group

IBM_2145:ITSO_SVC3:admin>prestartfcconsistgrp FCCG1
IBM_2145:ITSO_SVC3:admin>prestartfcconsistgrp FCCG2
IBM_2145:ITSO_SVC3:admin>lsfcconsistgrp FCCG1
id 1
name FCCG1
status prepared
autodelete off
FC_mapping_id 0
FC_mapping_name DB_Map1
FC_mapping_id 1
FC_mapping_name Log_Map1
IBM_2145:ITSO_SVC3:admin>lsfcconsistgrp
id name  status
1  FCCG1 prepared
2  FCCG2 prepared

9.12.7 Starting (triggering) FlashCopy mappings


The startfcmap command is used to start a single FlashCopy mapping. When invoked, a point-in-time copy of the source volume is created on the target volume.

When the FlashCopy mapping is triggered, it enters the Copying state. The way that the copy proceeds depends on the background copy rate attribute of the mapping. If the mapping is set to 0 (NOCOPY), only data that is subsequently updated on the source will be copied to the destination. We suggest that you use this scenario as a backup copy while the mapping exists in the Copying state. If the copy is stopped, the destination is unusable.

If you want to end up with a duplicate copy of the source at the destination, set the background copy rate greater than 0. This way, the system copies all of the data (even unchanged data) to the destination and eventually reaches the idle_or_copied state. After this data is copied, you can delete the mapping and have a usable point-in-time copy of the source at the destination.

In Example 9-127, after the FlashCopy is started, App_Map1 changes to the copying status.
Example 9-127 Start App_Map1

IBM_2145:ITSO_SVC3:admin>startfcmap App_Map1
IBM_2145:ITSO_SVC3:admin>lsfcmap
id name source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id group_name status progress copy_rate clean_progress incremental partner_FC_id partner_FC_name restoring start_time rc_controlled
0 DB_Map1 3 DB_Source 4 DB_Target1 1 FCCG1 prepared 0 50 0 off   no  no
1 Log_Map1 6 Log_Source 7 Log_Target1 1 FCCG1 prepared 0 50 0 off   no  no
2 App_Map1 9 App_Source 10 App_Target1   copying 0 50 100 off   no 110929113407 no
3 DB_Map2 3 DB_Source 5 DB_Target2 2 FCCG2 prepared 0 50 0 off   no  no
4 Log_Map2 6 Log_Source 8 Log_Target2 2 FCCG2 prepared 0 50 0 off   no  no
IBM_2145:ITSO_SVC3:admin>lsfcmap App_Map1
id 2
name App_Map1
source_vdisk_id 9
source_vdisk_name App_Source
target_vdisk_id 10
target_vdisk_name App_Target1
group_id
group_name
status copying
progress 0
copy_rate 50
start_time 110929113407
dependent_mappings 0
autodelete off
clean_progress 100
clean_rate 50
incremental off
difference 0
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
rc_controlled no

9.12.8 Starting (triggering) FlashCopy Consistency Group


We execute the startfcconsistgrp command, as shown in Example 9-128, and afterward the database can be resumed. We have created two point-in-time consistent copies of the DB and Log volumes. After execution, the Consistency Group and the FlashCopy maps are all in the copying status.
Example 9-128 Start FlashCopy Consistency Group

IBM_2145:ITSO_SVC3:admin>startfcconsistgrp FCCG1
IBM_2145:ITSO_SVC3:admin>startfcconsistgrp FCCG2
IBM_2145:ITSO_SVC3:admin>lsfcconsistgrp FCCG1
id 1
name FCCG1
status copying
autodelete off
FC_mapping_id 0
FC_mapping_name DB_Map1
FC_mapping_id 1
FC_mapping_name Log_Map1
IBM_2145:ITSO_SVC3:admin>lsfcconsistgrp
id name  status
1  FCCG1 copying
2  FCCG2 copying

9.12.9 Monitoring the FlashCopy progress


To monitor the background copy progress of the FlashCopy mappings, we issue the lsfcmapprogress command for each FlashCopy mapping. Alternatively, you can also query the copy progress by using the lsfcmap command. As shown in Example 9-129, DB_Map1 reports that its background copy is 23% complete, Log_Map1 reports 41%, Log_Map2 reports 4%, DB_Map2 reports 5%, and App_Map1 reports 10%.
Example 9-129 Monitoring background copy progress

IBM_2145:ITSO_SVC3:admin>lsfcmapprogress DB_Map1
id progress
0  23
IBM_2145:ITSO_SVC3:admin>lsfcmapprogress Log_Map1
id progress
1  41
IBM_2145:ITSO_SVC3:admin>lsfcmapprogress Log_Map2
id progress
4  4
IBM_2145:ITSO_SVC3:admin>lsfcmapprogress DB_Map2
id progress
3  5
IBM_2145:ITSO_SVC3:admin>lsfcmapprogress App_Map1
id progress
2  10

When the background copy has completed, the FlashCopy mapping enters the idle_or_copied state. When all of the FlashCopy mappings in a Consistency Group enter this state, the Consistency Group is at the idle_or_copied status as well. When in this state, a FlashCopy mapping can be deleted and the target disk can be used independently if, for example, another target disk is to be used for the next FlashCopy of the particular source volume.
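A quick check at that point might look like the following sketch (the statuses are assumed to have reached idle_or_copied, as in Example 9-131 later in this section):

IBM_2145:ITSO_SVC3:admin>lsfcconsistgrp
id name  status
1  FCCG1 idle_or_copied
2  FCCG2 idle_or_copied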

9.12.10 Stopping the FlashCopy mapping


The stopfcmap command is used to stop a FlashCopy mapping. This command allows you to stop an active (copying) or suspended mapping. When executed, this command stops a single FlashCopy mapping.


Tip: In a Multiple Target FlashCopy environment, if you want to stop a mapping or group, consider whether you want to keep any of the dependent mappings. If not, issue the stop command with the -force parameter, which stops all of the dependent maps and negates the need for the stopping copy process to run.

When a FlashCopy mapping is stopped, the target volume becomes invalid and is set offline by the SVC. The FlashCopy mapping needs to be prepared again, or retriggered, to bring the target volume online again.

Important: Only stop a FlashCopy mapping when the data on the target volume is not in use, or when you want to modify the FlashCopy mapping. When a FlashCopy mapping is stopped, the target volume becomes invalid and is set offline by the SVC, unless the mapping is in the Copying state with progress=100.

Example 9-130 shows how to stop the App_Map1 FlashCopy. The status of App_Map1 has changed to idle_or_copied.
Example 9-130 Stop APP_Map1 FlashCopy

IBM_2145:ITSO_SVC3:admin>stopfcmap App_Map1
IBM_2145:ITSO_SVC3:admin>lsfcmap App_Map1
id 2
name App_Map1
source_vdisk_id 9
source_vdisk_name App_Source
target_vdisk_id 10
target_vdisk_name App_Target1
group_id
group_name
status idle_or_copied
progress 100
copy_rate 50
start_time 110929113407
dependent_mappings 0
autodelete off
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
rc_controlled no

9.12.11 Stopping the FlashCopy Consistency Group


The stopfcconsistgrp command is used to stop any active FlashCopy Consistency Group; it stops all of the mappings in the Consistency Group. When a FlashCopy Consistency Group is stopped and its mappings are not 100% copied, the target volumes become invalid and are set offline by the SVC. The FlashCopy Consistency Group needs to be prepared again and restarted to bring the target volumes online again.

Important: Only stop a FlashCopy Consistency Group when the data on the target volumes is not in use, or when you want to modify the FlashCopy Consistency Group. When a Consistency Group is stopped, the target volumes might become invalid and be set offline by the SVC, depending on the state of each mapping.

As shown in Example 9-131, we stop the FCCG1 and FCCG2 Consistency Groups. The status of a stopped Consistency Group normally changes to stopped; in our case, however, all of the mappings had already completed the copy operation, so, as you can see, they are in the idle_or_copied status.
Example 9-131 Stop FCCG1 and FCCG2 Consistency Groups

IBM_2145:ITSO_SVC3:admin>stopfcconsistgrp FCCG1
IBM_2145:ITSO_SVC3:admin>stopfcconsistgrp FCCG2
IBM_2145:ITSO_SVC3:admin>lsfcconsistgrp
id name  status
1  FCCG1 idle_or_copied
2  FCCG2 idle_or_copied
IBM_2145:ITSO_SVC3:admin>lsfcmap -delim ,
id,name,source_vdisk_id,source_vdisk_name,target_vdisk_id,target_vdisk_name,group_id,group_name,status,progress,copy_rate,clean_progress,incremental,partner_FC_id,partner_FC_name,restoring,start_time,rc_controlled
0,DB_Map1,3,DB_Source,4,DB_Target1,1,FCCG1,idle_or_copied,100,50,100,off,,,no,110929113806,no
1,Log_Map1,6,Log_Source,7,Log_Target1,1,FCCG1,idle_or_copied,100,50,100,off,,,no,110929113806,no
2,App_Map1,9,App_Source,10,App_Target1,,,idle_or_copied,100,50,100,off,,,no,110929113407,no
3,DB_Map2,3,DB_Source,5,DB_Target2,2,FCCG2,idle_or_copied,100,50,100,off,,,no,110929113806,no
4,Log_Map2,6,Log_Source,8,Log_Target2,2,FCCG2,idle_or_copied,100,50,100,off,,,no,110929113806,no

9.12.12 Deleting the FlashCopy mapping


To delete a FlashCopy mapping, we use the rmfcmap command. When the command is executed, it attempts to delete the specified FlashCopy mapping. If the FlashCopy mapping is stopped, the command fails unless the -force flag is specified. If the mapping is active (copying), it must first be stopped before it can be deleted.

Deleting a mapping only deletes the logical relationship between the two volumes. However, when issued on an active FlashCopy mapping using the -force flag, the delete renders the data on the FlashCopy mapping target volume as inconsistent.

Tip: If you want to use the target volume as a normal volume, monitor the background copy progress until it is complete (100% copied) and, then, delete the FlashCopy mapping. Another option is to set the -autodelete option when creating the FlashCopy mapping.


As shown in Example 9-132, we delete App_Map1.


Example 9-132 Delete App_Map1

IBM_2145:ITSO_SVC3:admin>rmfcmap App_Map1

9.12.13 Deleting the FlashCopy Consistency Group


The rmfcconsistgrp command is used to delete a FlashCopy Consistency Group. When executed, this command deletes the specified Consistency Group. If there are mappings that are members of the group, the command fails unless the -force flag is specified. If you want to delete all of the mappings in the Consistency Group as well, first delete the mappings and then delete the Consistency Group. As shown in Example 9-133, we delete all of the maps and Consistency Groups and then check the result.
Example 9-133 Remove fcmaps and fcconsistgrp

IBM_2145:ITSO_SVC3:admin>rmfcmap DB_Map1
IBM_2145:ITSO_SVC3:admin>rmfcmap DB_Map2
IBM_2145:ITSO_SVC3:admin>rmfcmap Log_Map1
IBM_2145:ITSO_SVC3:admin>rmfcmap Log_Map2
IBM_2145:ITSO_SVC3:admin>rmfcconsistgrp FCCG1
IBM_2145:ITSO_SVC3:admin>rmfcconsistgrp FCCG2
IBM_2145:ITSO_SVC3:admin>lsfcconsistgrp
IBM_2145:ITSO_SVC3:admin>lsfcmap
IBM_2145:ITSO_SVC3:admin>

9.12.14 Migrating a volume to a thin-provisioned volume


Use the following scenario to migrate a volume to a thin-provisioned volume:
1. Create a thin-provisioned (space-efficient) target volume with exactly the same size as the volume that you want to migrate. Example 9-134 shows the details of the volume with ID 11. It has been created as a thin-provisioned volume with the same size as the App_Source volume.
Example 9-134 lsvdisk 11 command

IBM_2145:ITSO_SVC3:admin>lsvdisk 11
id 11
name App_Source_SE
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name Multi_Tier_Pool
capacity 10.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018281BEE00000000000000B
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 1
filesystem
mirror_write_priority latency
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name Multi_Tier_Pool
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 221.17MB
free_capacity 220.77MB
overallocation 4629
autoexpand on
warning 80
grainsize 32
se_copy yes
easy_tier on
easy_tier_status active
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 221.17MB
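A thin-provisioned volume such as this one can be created with the mkvdisk command, using the same flags that appear in the audit log in Example 9-118. The following is a sketch only; the -rsize value (the initial real capacity) is an assumption for illustration:

IBM_2145:ITSO_SVC3:admin>mkvdisk -mdiskgrp Multi_Tier_Pool -iogrp 0 -size 10 -unit gb -vtype striped -rsize 2% -autoexpand -grainsize 32 -name App_Source_SE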

2. Define a FlashCopy mapping in which the non-thin-provisioned volume is the source and the thin-provisioned volume is the target. Specify a copy rate as high as possible and activate the -autodelete option for the mapping, as shown in Example 9-135.

Example 9-135 mkfcmap

IBM_2145:ITSO_SVC3:admin>mkfcmap -source App_Source -target App_Source_SE -name MigrtoThinProv -copyrate 100 -autodelete
FlashCopy Mapping, id [0], successfully created
IBM_2145:ITSO_SVC3:admin>lsfcmap 0
id 0
name MigrtoThinProv
source_vdisk_id 9
source_vdisk_name App_Source
target_vdisk_id 11
target_vdisk_name App_Source_SE
group_id
group_name
status idle_or_copied
progress 0
copy_rate 100
start_time
dependent_mappings 0
autodelete on
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
rc_controlled no

3. Run the prestartfcmap command and the lsfcmap MigrtoThinProv command, as shown in Example 9-136.

Example 9-136 prestartfcmap

IBM_2145:ITSO_SVC3:admin>prestartfcmap MigrtoThinProv
IBM_2145:ITSO_SVC3:admin>lsfcmap MigrtoThinProv
id 0
name MigrtoThinProv
source_vdisk_id 9
source_vdisk_name App_Source
target_vdisk_id 11
target_vdisk_name App_Source_SE
group_id
group_name
status prepared
progress 0
copy_rate 100
start_time
dependent_mappings 0
autodelete on
clean_progress 0
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
rc_controlled no

4. Run the startfcmap command, as shown in Example 9-137 on page 551.


Example 9-137 startfcmap command

IBM_2145:ITSO_SVC3:admin>startfcmap MigrtoThinProv

5. Monitor the copy process using the lsfcmapprogress command, as shown in Example 9-138.
Example 9-138 lsfcmapprogress command

IBM_2145:ITSO_SVC3:admin>lsfcmapprogress MigrtoThinProv
id progress
0  67

6. When the background copy completes, the FlashCopy mapping is deleted automatically, as shown in Example 9-139.
Example 9-139 lsfcmap command

IBM_2145:ITSO_SVC3:admin>lsfcmap MigrtoThinProv
id 0
name MigrtoThinProv
source_vdisk_id 9
source_vdisk_name App_Source
target_vdisk_id 11
target_vdisk_name App_Source_SE
group_id
group_name
status copying
progress 67
copy_rate 100
start_time 110929135848
dependent_mappings 0
autodelete on
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
rc_controlled no
IBM_2145:ITSO_SVC3:admin>lsfcmapprogress MigrtoThinProv
CMMVC5804E The action failed because an object that was specified in the command does not exist.
IBM_2145:ITSO_SVC3:admin>

An independent copy of the source volume (App_Source) has been created. The migration has completed, as shown in Example 9-140.
Example 9-140 lsvdisk App_Source

IBM_2145:ITSO_SVC3:admin>lsvdisk App_Source
id 9
name App_Source
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name Multi_Tier_Pool
capacity 10.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018281BEE000000000000009
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
filesystem
mirror_write_priority latency
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name Multi_Tier_Pool
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status active
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 10.00GB

Real size: Independently of the real size that you defined for the target thin-provisioned volume, after the copy completes its real size will be at least the capacity of the source volume.

To migrate a thin-provisioned volume to a fully allocated volume, you can follow the same scenario.
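For the reverse direction, the mapping is simply defined the other way around. A minimal sketch, assuming that a fully allocated target volume named App_Source_Full (a hypothetical name) has already been created with the same virtual size:

IBM_2145:ITSO_SVC3:admin>mkfcmap -source App_Source_SE -target App_Source_Full -name MigrtoFull -copyrate 100 -autodelete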

9.12.15 Reverse FlashCopy


You can also have a reverse FlashCopy mapping without having to remove the original FlashCopy mapping, and without restarting a FlashCopy mapping from the beginning.


In Example 9-141, FCMAP_1 is the forward FlashCopy mapping, and FCMAP_rev_1 is a reverse FlashCopy mapping. We also have a cascaded mapping, FCMAP_2, whose source is the target volume of FCMAP_1 and whose target is a different volume named Volume_FC_T1.

In our example, after creating the environment, we started FCMAP_1 and later FCMAP_2. To show why the -restore parameter is required, we first started FCMAP_rev_1 without specifying it, which produces the following message: CMMVC6298E The command failed because a target VDisk has dependent FlashCopy mappings. When starting a reverse FlashCopy mapping, you must use the -restore option to indicate that you want to overwrite the data on the source disk of the forward mapping.
Example 9-141 Reverse FlashCopy

IBM_2145:ITSO_SVC3:admin>lsvdisk
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state se_copy_count RC_change
3 Volume_FC_S 0 io_grp0 online 1 Multi_Tier_Pool 10.00GB striped     60050768018281BEE000000000000003 0 1 empty 0 0 no
4 Volume_FC_T_S1 0 io_grp0 online 1 Multi_Tier_Pool 10.00GB striped     60050768018281BEE000000000000004 0 1 empty 0 0 no
5 Volume_FC_T1 0 io_grp0 online 1 Multi_Tier_Pool 10.00GB striped     60050768018281BEE000000000000005 0 1 empty 0 0 no
IBM_2145:ITSO_SVC3:admin>mkfcmap -source Volume_FC_S -target Volume_FC_T_S1 -name FCMAP_1 -copyrate 50
FlashCopy Mapping, id [0], successfully created
IBM_2145:ITSO_SVC3:admin>mkfcmap -source Volume_FC_T_S1 -target Volume_FC_S -name FCMAP_rev_1 -copyrate 50
FlashCopy Mapping, id [1], successfully created
IBM_2145:ITSO_SVC3:admin>mkfcmap -source Volume_FC_T_S1 -target Volume_FC_T1 -name FCMAP_2 -copyrate 50
FlashCopy Mapping, id [2], successfully created
IBM_2145:ITSO_SVC3:admin>lsfcmap
id name source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id group_name status progress copy_rate clean_progress incremental partner_FC_id partner_FC_name restoring start_time rc_controlled
0 FCMAP_1 3 Volume_FC_S 4 Volume_FC_T_S1   idle_or_copied 0 50 100 off 1 FCMAP_rev_1 no  no
1 FCMAP_rev_1 4 Volume_FC_T_S1 3 Volume_FC_S   idle_or_copied 0 50 100 off 0 FCMAP_1 no  no
2 FCMAP_2 4 Volume_FC_T_S1 5 Volume_FC_T1   idle_or_copied 0 50 100 off   no  no
IBM_2145:ITSO_SVC3:admin>startfcmap -prep FCMAP_1
IBM_2145:ITSO_SVC3:admin>startfcmap -prep FCMAP_2
IBM_2145:ITSO_SVC3:admin>lsfcmap
id name source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id group_name status progress copy_rate clean_progress incremental partner_FC_id partner_FC_name restoring start_time rc_controlled
0 FCMAP_1 3 Volume_FC_S 4 Volume_FC_T_S1   copying 0 50 100 off 1 FCMAP_rev_1 no  no
1 FCMAP_rev_1 4 Volume_FC_T_S1 3 Volume_FC_S   idle_or_copied 0 50 100 off 0 FCMAP_1 no  no
2 FCMAP_2 4 Volume_FC_T_S1 5 Volume_FC_T1   copying 4 50 100 off   no 110929143739 no
IBM_2145:ITSO_SVC3:admin>startfcmap -prep FCMAP_rev_1
CMMVC6298E The command failed because a target VDisk has dependent FlashCopy mappings.
IBM_2145:ITSO_SVC3:admin>startfcmap -prep -restore FCMAP_rev_1
IBM_2145:ITSO_SVC3:admin>lsfcmap
id name source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id group_name status progress copy_rate clean_progress incremental partner_FC_id partner_FC_name restoring start_time rc_controlled
0 FCMAP_1 3 Volume_FC_S 4 Volume_FC_T_S1   copying 43 100 56 off 1 FCMAP_rev_1 no 110929151911 no
1 FCMAP_rev_1 4 Volume_FC_T_S1 3 Volume_FC_S   copying 56 100 43 off 0 FCMAP_1 yes 110929152030 no
2 FCMAP_2 4 Volume_FC_T_S1 5 Volume_FC_T1   copying 37 100 100 off   no 110929151926 no

As you can see in Example 9-141 on page 553, FCMAP_rev_1 shows a restoring value of yes while the FlashCopy mapping is copying. After it has finished copying, the restoring value field changes to no.

9.12.16 Split-stopping of FlashCopy maps


The stopfcmap command has a -split option. This option allows the source of a mapping that is 100% complete to be removed from the head of a cascade when the mapping is stopped. For example, if we have four volumes in a cascade (A → B → C → D) and the map A → B is 100% complete, using the stopfcmap -split mapAB command results in mapAB becoming idle_or_copied and the remaining cascade becoming B → C → D.


Without the -split option, volume A remains at the head of the cascade (A → C → D). Consider this sequence of steps:
1. The user takes a backup using the mapping A → B. A is the production volume; B is a backup.
2. At a later point, the user experiences corruption on A and so reverses the mapping, B → A.
3. The user then takes another backup from the production disk A, resulting in the cascade B → A → C.

Stopping A → B without the -split option results in the cascade B → C. Note that the backup disk B is now at the head of this cascade. When the user next wants to take a backup to B, the user can still start mapping A → B (using the -restore flag), but the user cannot then reverse the mapping to A (B → A or C → A).

Stopping A → B with the -split option results in the cascade A → C. This action does not cause the same problem, because the production disk A is at the head of the cascade instead of the backup disk B.
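A minimal sketch of the split-stop itself, assuming a 100% complete mapping named mapAB as in the example above (the mapping name is hypothetical):

IBM_2145:ITSO_SVC3:admin>stopfcmap -split mapAB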

9.13 Metro Mirror operation


Note: This example is for intercluster operations only. If you want to set up intracluster operations, we highlight those parts of the following procedure that you do not need to perform.

In the following scenario, we set up an intercluster Metro Mirror relationship between the SVC system ITSO_SVC1 at the primary site and the SVC system ITSO_SVC4 at the secondary site. Table 9-3 shows the details of the volumes.
Table 9-3 Volume details

Content of volume    Volumes at primary site  Volumes at secondary site
Database files       MM_DB_Pri                MM_DB_Sec
Database log files   MM_DBLog_Pri             MM_DBLog_Sec
Application files    MM_App_Pri               MM_App_Sec

Because data consistency is needed across the MM_DB_Pri and MM_DBLog_Pri volumes, a CG_WIN2K3_MM Consistency Group is created to handle Metro Mirror relationships for them. Because in this scenario application files are independent of the database, a stand-alone Metro Mirror relationship is created for the MM_App_Pri volume. Figure 9-7 on page 556 illustrates the Metro Mirror setup.


Figure 9-7 Metro Mirror scenario

9.13.1 Setting up Metro Mirror


In the following section, we assume that the source and target volumes have already been created and that the inter-switch links (ISLs) and zoning are in place, enabling the SVC clustered systems to communicate. To set up the Metro Mirror, perform the following steps:
1. Create an SVC partnership between ITSO_SVC1 and ITSO_SVC4, on both SVC clustered systems.
2. Create a Metro Mirror Consistency Group:
   Name: CG_W2K3_MM
3. Create the Metro Mirror relationship for MM_DB_Pri:
   Master: MM_DB_Pri
   Auxiliary: MM_DB_Sec
   Auxiliary SVC system: ITSO_SVC4
   Name: MMREL1
   Consistency Group: CG_W2K3_MM
4. Create the Metro Mirror relationship for MM_DBLog_Pri:
   Master: MM_DBLog_Pri
   Auxiliary: MM_DBLog_Sec
   Auxiliary SVC system: ITSO_SVC4
   Name: MMREL2
   Consistency Group: CG_W2K3_MM


5. Create the Metro Mirror relationship for MM_App_Pri:
   Master: MM_App_Pri
   Auxiliary: MM_App_Sec
   Auxiliary SVC system: ITSO_SVC4
   Name: MMREL3

In the following section, we perform each step by using the CLI.

9.13.2 Creating an SVC partnership between ITSO_SVC1 and ITSO_SVC4


We create the SVC partnership on both systems.

Intracluster Metro Mirror: If you are creating an intracluster Metro Mirror, do not perform the next step; instead, go to 9.13.3, Creating a Metro Mirror Consistency Group on page 560.

Preverification
To verify that both systems can communicate with each other, use the lspartnershipcandidate command. As shown in Example 9-142, ITSO_SVC4 is an eligible SVC system candidate at ITSO_SVC1 for the SVC system partnership, and vice versa. Therefore, both systems are communicating with each other.
Example 9-142 Listing the available SVC systems for partnership

IBM_2145:ITSO_SVC1:admin>lspartnershipcandidate
id               configured name
0000020061C06FCA no         ITSO_SVC4
000002006AC03A42 no         ITSO_SVC2
0000020060A06FB8 no         ITSO_SVC3
00000200A0C006B2 no         ITSO-Storwize-V7000-2

IBM_2145:ITSO_SVC4:admin>lspartnershipcandidate
id               configured name
000002006AC03A42 no         ITSO_SVC2
0000020060A06FB8 no         ITSO_SVC3
00000200A0C006B2 no         ITSO-Storwize-V7000-2
000002006BE04FC4 no         ITSO_SVC1

Example 9-143 shows the output of the lspartnership and lssystem commands before the Metro Mirror relationship is set up. We show this output so that you can compare it with the output after the relationship has been set up. Starting with SVC 6.3, you can create a partnership between an SVC system and an IBM Storwize V7000 system. To do so, you must change the layer parameter on the IBM Storwize V7000 system from storage to replication with the chsystem command. This parameter cannot be changed on the SVC system, where it is fixed to appliance.
Example 9-143 Pre-verification of system configuration

IBM_2145:ITSO_SVC1:admin>lspartnership
id               name      location partnership bandwidth
000002006BE04FC4 ITSO_SVC1 local

IBM_2145:ITSO_SVC4:admin>lspartnership
id               name      location partnership bandwidth
0000020061C06FCA ITSO_SVC4 local

IBM_2145:ITSO_SVC1:admin>lssystem
id 000002006BE04FC4
name ITSO_SVC1
location local
partnership
bandwidth
total_mdisk_capacity 766.5GB
space_in_mdisk_grps 766.5GB
space_allocated_to_vdisks 0.00MB
total_free_space 766.5GB
total_vdiskcopy_capacity 0.00MB
total_used_capacity 0.00MB
total_overallocation 0
total_vdisk_capacity 0.00MB
total_allocated_extent_capacity 1.50GB
statistics_status on
statistics_frequency 15
cluster_locale en_US
time_zone 520 US/Pacific
code_level 6.3.0.0 (build 54.0.1109090000)
console_IP 10.18.228.81:443
id_alias 000002006BE04FC4
gm_link_tolerance 300
gm_inter_cluster_delay_simulation 0
gm_intra_cluster_delay_simulation 0
gm_max_host_delay 5
email_reply
email_contact
email_contact_primary
email_contact_alternate
email_contact_location
email_contact2
email_contact2_primary
email_contact2_alternate
email_state stopped
inventory_mail_interval 0
cluster_ntp_IP_address
cluster_isns_IP_address
iscsi_auth_method chap
iscsi_chap_secret passw0rd
auth_service_configured no
auth_service_enabled no
auth_service_url
auth_service_user_name
auth_service_pwd_set no
auth_service_cert_set no
auth_service_type tip
relationship_bandwidth_limit 25
tier generic_ssd
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier generic_hdd
tier_capacity 766.50GB
tier_free_capacity 766.50GB
has_nas_key no
layer appliance

IBM_2145:ITSO_SVC4:admin>lssystem
id 0000020061C06FCA
name ITSO_SVC4
location local
partnership
bandwidth
total_mdisk_capacity 768.0GB
space_in_mdisk_grps 0
space_allocated_to_vdisks 0.00MB
total_free_space 768.0GB
total_vdiskcopy_capacity 0.00MB
total_used_capacity 0.00MB
total_overallocation 0
total_vdisk_capacity 0.00MB
total_allocated_extent_capacity 0.00MB
statistics_status on
statistics_frequency 15
cluster_locale en_US
time_zone 520 US/Pacific
code_level 6.3.0.0 (build 54.0.1109090000)
console_IP 10.18.228.84:443
id_alias 0000020061C06FCA
gm_link_tolerance 300
gm_inter_cluster_delay_simulation 0
gm_intra_cluster_delay_simulation 0
gm_max_host_delay 5
email_reply
email_contact
email_contact_primary
email_contact_alternate
email_contact_location
email_contact2
email_contact2_primary
email_contact2_alternate
email_state stopped
inventory_mail_interval 0
cluster_ntp_IP_address
cluster_isns_IP_address
iscsi_auth_method none
iscsi_chap_secret
auth_service_configured no
auth_service_enabled no
auth_service_url
auth_service_user_name
auth_service_pwd_set no
auth_service_cert_set no
auth_service_type tip
relationship_bandwidth_limit 25
tier generic_ssd
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier generic_hdd
tier_capacity 0.00MB
tier_free_capacity 0.00MB
has_nas_key no
layer appliance


Partnership between clustered systems


In Example 9-144, a partnership is created between ITSO_SVC1 and ITSO_SVC4, specifying that 50 MBps of bandwidth is to be used for the background copy. To check the status of the newly created partnership, issue the lspartnership command. Notice that the new partnership is only partially configured; it remains partially configured until the partnership is also created from the other system.
Example 9-144 Creating the partnership from ITSO_SVC1 to ITSO_SVC4 and verifying the partnership

IBM_2145:ITSO_SVC1:admin>mkpartnership -bandwidth 50 ITSO_SVC4
IBM_2145:ITSO_SVC1:admin>lspartnership
id               name      location partnership                bandwidth
000002006BE04FC4 ITSO_SVC1 local
0000020061C06FCA ITSO_SVC4 remote   partially_configured_local 50

In Example 9-145, the partnership is created from ITSO_SVC4 back to ITSO_SVC1, again specifying a bandwidth of 50 MBps for the background copy. After creating the partnership, verify that the partnership is fully configured on both systems by reissuing the lspartnership command.
Example 9-145 Creating the partnership from ITSO_SVC4 to ITSO_SVC1 and verifying the partnership

IBM_2145:ITSO_SVC4:admin>mkpartnership -bandwidth 50 ITSO_SVC1
IBM_2145:ITSO_SVC4:admin>lspartnership
id               name      location partnership      bandwidth
0000020061C06FCA ITSO_SVC4 local
000002006BE04FC4 ITSO_SVC1 remote   fully_configured 50

9.13.3 Creating a Metro Mirror Consistency Group


In Example 9-146, we create the Metro Mirror Consistency Group using the mkrcconsistgrp command. This Consistency Group will be used for the Metro Mirror relationships of the database volumes named MM_DB_Pri and MM_DBLog_Pri. The Consistency Group is named CG_W2K3_MM.
Example 9-146 Creating the Metro Mirror Consistency Group CG_W2K3_MM

IBM_2145:ITSO_SVC1:admin>mkrcconsistgrp -cluster ITSO_SVC4 -name CG_W2K3_MM
RC Consistency Group, id [0], successfully created
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp
id name master_cluster_id master_cluster_name aux_cluster_id aux_cluster_name primary state relationship_count copy_type cycling_mode
0 CG_W2K3_MM 000002006BE04FC4 ITSO_SVC1 0000020061C06FCA ITSO_SVC4  empty 0 empty_group none

9.13.4 Creating the Metro Mirror relationships


In Example 9-147 on page 561, we create the Metro Mirror relationships MMREL1 and MMREL2 for MM_DB_Pri and MM_DBLog_Pri, and we make them members of the Metro Mirror Consistency Group CG_W2K3_MM. We use the lsvdisk command to list all of the volumes in the ITSO_SVC1 system, and we then use the lsrcrelationshipcandidate command to show the volumes in the ITSO_SVC4 system. By using this command, we check the possible candidates for MM_DB_Pri. After checking all of these conditions, we use the mkrcrelationship command to create the Metro Mirror relationship. To verify the newly created Metro Mirror relationships, we list them with the lsrcrelationship command.
Example 9-147 Creating Metro Mirror relationships MMREL1 and MMREL2

IBM_2145:ITSO_SVC1:admin>lsvdisk -filtervalue name=MM*
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state se_copy_count RC_change
0 MM_DB_Pri 0 io_grp0 online 0 Pool_DS3500-1 10.00GB striped     6005076801AF813F1000000000000031 0 1 empty 0 no
1 MM_DBLog_Pri 0 io_grp0 online 0 Pool_DS3500-1 10.00GB striped     6005076801AF813F1000000000000032 0 1 empty 0 no
2 MM_App_Pri 0 io_grp0 online 0 Pool_DS3500-1 10.00GB striped     6005076801AF813F1000000000000033 0 1 empty 0 no
IBM_2145:ITSO_SVC1:admin>lsrcrelationshipcandidate
id vdisk_name
0  MM_DB_Pri
1  MM_DBLog_Pri
2  MM_App_Pri
IBM_2145:ITSO_SVC1:admin>lsrcrelationshipcandidate -aux ITSO_SVC4 -master MM_DB_Pri
id vdisk_name
0  MM_DB_Sec
1  MM_DBLog_Sec
2  MM_App_Sec
IBM_2145:ITSO_SVC1:admin>mkrcrelationship -master MM_DB_Pri -aux MM_DB_Sec -cluster ITSO_SVC4 -consistgrp CG_W2K3_MM -name MMREL1
RC Relationship, id [0], successfully created
IBM_2145:ITSO_SVC1:admin>mkrcrelationship -master MM_Log_Pri -aux MM_Log_Sec -cluster ITSO_SVC4 -consistgrp CG_W2K3_MM -name MMREL2
RC Relationship, id [3], successfully created
IBM_2145:ITSO_SVC1:admin>lsrcrelationship
id name master_cluster_id master_cluster_name master_vdisk_id master_vdisk_name aux_cluster_id aux_cluster_name aux_vdisk_id aux_vdisk_name primary consistency_group_id consistency_group_name state bg_copy_priority progress copy_type cycling_mode
0 MMREL1 000002006BE04FC4 ITSO_SVC1 0 MM_DB_Pri 0000020061C06FCA ITSO_SVC4 0 MM_DB_Sec master 0 CG_W2K3_MM inconsistent_stopped 50 0 metro none
3 MMREL2 000002006BE04FC4 ITSO_SVC1 3 MM_Log_Pri 0000020061C06FCA ITSO_SVC4 3 MM_Log_Sec master 0 CG_W2K3_MM inconsistent_stopped 50 0 metro none

9.13.5 Creating a stand-alone Metro Mirror relationship for MM_App_Pri


In Example 9-148 on page 562, we create the stand-alone Metro Mirror relationship MMREL3 for MM_App_Pri. After it is created, we check the status of this Metro Mirror relationship.
Notice that the state of MMREL3 is consistent_stopped, because it was created with the -sync option. The -sync option indicates that the secondary (auxiliary) volume is already synchronized with the primary (master) volume, so the initial background synchronization is skipped. In this scenario the volumes are not actually synchronized; we use the option to illustrate setting up a relationship between pre-synchronized master and auxiliary volumes. MMREL1 and MMREL2 are in the inconsistent_stopped state because they were created without the -sync option, so their auxiliary volumes still need to be synchronized with their primary volumes.

Tip: Use the -sync option only when the target volume has already mirrored all of the data from the source volume. With this option, no initial background copy takes place between the primary volume and the secondary volume.
Example 9-148 Creating a stand-alone relationship and verifying it

IBM_2145:ITSO_SVC1:admin>mkrcrelationship -master MM_App_Pri -aux MM_App_Sec -sync -cluster ITSO_SVC4 -name MMREL3
RC Relationship, id [2], successfully created
IBM_2145:ITSO_SVC1:admin>lsrcrelationship 2
id 2
name MMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 2
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name


9.13.6 Starting Metro Mirror


Now that the Metro Mirror Consistency Group and relationships are in place, we are ready to use Metro Mirror relationships in our environment. When implementing Metro Mirror, the goal is to reach a consistent and synchronized state that can provide redundancy for a dataset if a failure occurs that affects the production site. In the following section, we show how to stop and start stand-alone Metro Mirror relationships and Consistency Groups.

Starting a stand-alone Metro Mirror relationship


In Example 9-149, we start a stand-alone Metro Mirror relationship named MMREL3. Because the Metro Mirror relationship was in the Consistent stopped state and no updates have been made to the primary volume, the relationship quickly enters the Consistent synchronized state.
Example 9-149 Starting the stand-alone Metro Mirror relationship

IBM_2145:ITSO_SVC1:admin>startrcrelationship MMREL3
IBM_2145:ITSO_SVC1:admin>lsrcrelationship MMREL3
id 2
name MMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 2
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name

9.13.7 Starting a Metro Mirror Consistency Group


In Example 9-150 on page 564, we start the Metro Mirror Consistency Group CG_W2K3_MM. Because the Consistency Group was in the Inconsistent stopped state, it enters the Inconsistent copying state until the background copy has completed for all of the relationships in the Consistency Group. Upon completion of the background copy, it enters the Consistent synchronized state.
Example 9-150 Starting the Metro Mirror Consistency Group

IBM_2145:ITSO_SVC1:admin>startrcconsistgrp CG_W2K3_MM
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp
id name       master_cluster_id master_cluster_name aux_cluster_id   aux_cluster_name primary state                relationship_count copy_type cycling_mode
0  CG_W2K3_MM 000002006BE04FC4  ITSO_SVC1           0000020061C06FCA ITSO_SVC4        master  inconsistent_copying 2                  metro     none

9.13.8 Monitoring the background copy progress


To monitor the background copy progress, we can use the lsrcrelationship command. This command shows all of the defined Metro Mirror relationships if it is used without any arguments. In the command output, progress indicates the current background copy progress. Our Metro Mirror relationships are shown in Example 9-151.

Using SNMP traps: Setting up SNMP traps for the SVC enables automatic notification when Metro Mirror Consistency Groups or relationships change state.
Example 9-151 Monitoring background copy progress example

IBM_2145:ITSO_SVC1:admin>lsrcrelationship MMREL1
id 0
name MMREL1
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 0
master_vdisk_name MM_DB_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 0
aux_vdisk_name MM_DB_Sec
primary master
consistency_group_id 0
consistency_group_name CG_W2K3_MM
state inconsistent_copying
bg_copy_priority 50
progress 81
freeze_time
status online
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
IBM_2145:ITSO_SVC1:admin>lsrcrelationship MMREL2
id 1
name MMREL2
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 1
master_vdisk_name MM_DBLog_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 1
aux_vdisk_name MM_DBLog_Sec
primary master
consistency_group_id 0
consistency_group_name CG_W2K3_MM
state inconsistent_copying
bg_copy_priority 50
progress 82
freeze_time
status online
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name

When all Metro Mirror relationships have completed the background copy, the Consistency Group enters the Consistent synchronized state, as shown in Example 9-152.
Example 9-152 Listing the Metro Mirror Consistency Group

IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name MMREL1
RC_rel_id 1
RC_rel_name MMREL2
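If you prefer to watch the background copy from a management workstation instead of reissuing the command by hand, you can poll a relationship over ssh. The following is a minimal sketch and not part of the captured examples; the workstation shell, the cluster address, and key-based authentication for the admin user are assumptions:

#!/bin/sh
# Poll the state and progress of MMREL1 once a minute
# (assumes key-based ssh access to the SVC system as user admin).
while true
do
  ssh admin@ITSO_SVC1 "lsrcrelationship -delim : MMREL1" | grep -E "^(state|progress):"
  sleep 60
done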


9.13.9 Stopping and restarting Metro Mirror


Now that the Metro Mirror Consistency Group and relationships are running, the following sections describe how to stop, restart, and change the direction of the stand-alone Metro Mirror relationships and the Consistency Group.

9.13.10 Stopping a stand-alone Metro Mirror relationship


Example 9-153 shows how to stop the stand-alone Metro Mirror relationship, while enabling access (write I/O) to both the primary and secondary volumes. It also shows the relationship entering the Idling state.
Example 9-153 Stopping stand-alone Metro Mirror relationship and enabling access to the secondary

IBM_2145:ITSO_SVC1:admin>stoprcrelationship -access MMREL3
IBM_2145:ITSO_SVC1:admin>lsrcrelationship MMREL3
id 2
name MMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 2
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary
consistency_group_id
consistency_group_name
state idling
bg_copy_priority 50
progress
freeze_time
status
sync in_sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name

9.13.11 Stopping a Metro Mirror Consistency Group


Example 9-154 shows how to stop the Metro Mirror Consistency Group without specifying the -access flag. The Consistency Group enters the Consistent stopped state.
Example 9-154 Stopping a Metro Mirror Consistency Group

IBM_2145:ITSO_SVC1:admin>stoprcconsistgrp CG_W2K3_MM
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary master
state consistent_stopped
relationship_count 2
freeze_time
status
sync in_sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name MMREL1
RC_rel_id 1
RC_rel_name MMREL2

If, afterwards, we want to enable access (write I/O) to the secondary volumes, we reissue the stoprcconsistgrp command, specifying the -access flag. The Consistency Group transits to the Idling state, as shown in Example 9-155.
Example 9-155 Stopping a Metro Mirror Consistency Group and enabling access to the secondary

IBM_2145:ITSO_SVC1:admin>stoprcconsistgrp -access CG_W2K3_MM
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary
state idling
relationship_count 2
freeze_time
status
sync in_sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name MMREL1
RC_rel_id 1
RC_rel_name MMREL2

9.13.12 Restarting a Metro Mirror relationship in the Idling state


When restarting a Metro Mirror relationship in the Idling state, we must specify the copy direction. If any updates have been performed on either the master or the auxiliary volume, consistency will be compromised. Therefore, we must issue the command with the -force flag to restart a relationship, as shown in Example 9-156.
Example 9-156 Restarting a Metro Mirror relationship after updates in the Idling state

IBM_2145:ITSO_SVC1:admin>startrcrelationship -primary master -force MMREL3


IBM_2145:ITSO_SVC1:admin>lsrcrelationship MMREL3
id 2
name MMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 2
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name

9.13.13 Restarting a Metro Mirror Consistency Group in the Idling state


When restarting a Metro Mirror Consistency Group in the Idling state, we must specify the copy direction. If any updates have been performed on either the master or the auxiliary volume in any of the Metro Mirror relationships in the Consistency Group, the consistency is compromised. Therefore, we must use the -force flag to start a relationship. If the -force flag is not used, the command fails. In Example 9-157, we change the copy direction by specifying the auxiliary volumes to become the primaries.
Example 9-157 Restarting a Metro Mirror relationship while changing the copy direction

IBM_2145:ITSO_SVC1:admin>startrcconsistgrp -force -primary aux CG_W2K3_MM
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name MMREL1
RC_rel_id 1
RC_rel_name MMREL2

9.13.14 Changing copy direction for Metro Mirror


In this section, we show how to change the copy direction of the stand-alone Metro Mirror relationship and the Consistency Group.

9.13.15 Switching copy direction for a Metro Mirror relationship


When a Metro Mirror relationship is in the Consistent synchronized state, we can change the copy direction for the relationship by using the switchrcrelationship command and specifying the primary volume. If the specified volume is already the primary when you issue this command, the command has no effect. In Example 9-158, we change the copy direction for the stand-alone Metro Mirror relationship by specifying the auxiliary volume to become the primary.

Important: When the copy direction is switched, it is crucial that no outstanding I/O exists to the volume that transitions from primary to secondary, because all I/O to that volume is inhibited when it becomes the secondary. Therefore, careful planning is required prior to using the switchrcrelationship command.
Example 9-158 Switching the copy direction for a Metro Mirror relationship

IBM_2145:ITSO_SVC1:admin>lsrcrelationship MMREL3
id 2
name MMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 2
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
IBM_2145:ITSO_SVC1:admin>switchrcrelationship -primary aux MMREL3
IBM_2145:ITSO_SVC1:admin>lsrcrelationship MMREL3
id 2
name MMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 2
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name

9.13.16 Switching copy direction for a Metro Mirror Consistency Group


When a Metro Mirror Consistency Group is in the Consistent synchronized state, we can change the copy direction for the Consistency Group by using the switchrcconsistgrp command and specifying the primary volume. If the specified volume is already the primary when you issue this command, the command has no effect. In Example 9-159 on page 571, we change the copy direction for the Metro Mirror Consistency Group by specifying the auxiliary volumes to become the primaries.

Important: When the copy direction is switched, it is crucial that no outstanding I/O exists to the volumes that transition from primary to secondary, because all I/O is inhibited when those volumes become the secondaries. Therefore, careful planning is required prior to using the switchrcconsistgrp command.


Example 9-159 Switching the copy direction for a Metro Mirror Consistency Group

IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name MMREL1
RC_rel_id 1
RC_rel_name MMREL2
IBM_2145:ITSO_SVC1:admin>switchrcconsistgrp -primary aux CG_W2K3_MM
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name MMREL1
RC_rel_id 1
RC_rel_name MMREL2

9.13.17 Creating an SVC partnership among many clustered systems


Starting with SVC 5.1, you can have a clustered system partnership among multiple SVC systems. This capability allows you to create the following four configurations, using a maximum of four connected systems:
Star configuration
Triangle configuration
Fully connected configuration
Daisy-chain configuration


In this section, we describe how to configure the SVC system partnership for each configuration.

Important: To have a supported and working configuration, all SVC systems must be at level 5.1 or higher.

In our scenarios, we configure the SVC partnership by referring to the clustered systems as A, B, C, and D:
ITSO_SVC1 = A
ITSO_SVC2 = B
ITSO_SVC3 = C
ITSO_SVC4 = D

Example 9-160 shows the available systems for a partnership, using the lspartnershipcandidate command on each system.
Example 9-160 Available clustered systems

IBM_2145:ITSO_SVC1:admin>lspartnershipcandidate
id               configured name
0000020061C06FCA no         ITSO_SVC4
0000020060A06FB8 no         ITSO_SVC3
000002006AC03A42 no         ITSO_SVC2
IBM_2145:ITSO_SVC2:admin>lspartnershipcandidate
id               configured name
0000020061C06FCA no         ITSO_SVC4
000002006BE04FC4 no         ITSO_SVC1
0000020060A06FB8 no         ITSO_SVC3
IBM_2145:ITSO_SVC3:admin>lspartnershipcandidate
id               configured name
000002006BE04FC4 no         ITSO_SVC1
0000020061C06FCA no         ITSO_SVC4
000002006AC03A42 no         ITSO_SVC2
IBM_2145:ITSO_SVC4:admin>lspartnershipcandidate
id               configured name
000002006BE04FC4 no         ITSO_SVC1
0000020060A06FB8 no         ITSO_SVC3
000002006AC03A42 no         ITSO_SVC2

9.13.18 Star configuration partnership


Figure 9-8 on page 573 shows the star configuration.


Figure 9-8 Star configuration

Example 9-161 shows the sequence of mkpartnership commands to execute to create a star configuration.
Example 9-161 Creating a star configuration using the mkpartnership command

From ITSO_SVC1 to multiple systems
IBM_2145:ITSO_SVC1:admin>mkpartnership -bandwidth 50 ITSO_SVC2
IBM_2145:ITSO_SVC1:admin>mkpartnership -bandwidth 50 ITSO_SVC3
IBM_2145:ITSO_SVC1:admin>mkpartnership -bandwidth 50 ITSO_SVC4

From ITSO_SVC2 to ITSO_SVC1
IBM_2145:ITSO_SVC2:admin>mkpartnership -bandwidth 50 ITSO_SVC1

From ITSO_SVC3 to ITSO_SVC1
IBM_2145:ITSO_SVC3:admin>mkpartnership -bandwidth 50 ITSO_SVC1

From ITSO_SVC4 to ITSO_SVC1
IBM_2145:ITSO_SVC4:admin>mkpartnership -bandwidth 50 ITSO_SVC1

From ITSO_SVC1
IBM_2145:ITSO_SVC1:admin>lspartnership
id               name      location partnership      bandwidth
000002006BE04FC4 ITSO_SVC1 local
000002006AC03A42 ITSO_SVC2 remote   fully_configured 50
0000020060A06FB8 ITSO_SVC3 remote   fully_configured 50
0000020061C06FCA ITSO_SVC4 remote   fully_configured 50

From ITSO_SVC2
IBM_2145:ITSO_SVC2:admin>lspartnership
id               name      location partnership      bandwidth
000002006AC03A42 ITSO_SVC2 local
000002006BE04FC4 ITSO_SVC1 remote   fully_configured 50

From ITSO_SVC3
IBM_2145:ITSO_SVC3:admin>lspartnership
id               name      location partnership      bandwidth
0000020060A06FB8 ITSO_SVC3 local
000002006BE04FC4 ITSO_SVC1 remote   fully_configured 50

From ITSO_SVC4
IBM_2145:ITSO_SVC4:admin>lspartnership
id               name      location partnership      bandwidth
0000020061C06FCA ITSO_SVC4 local
000002006BE04FC4 ITSO_SVC1 remote   fully_configured 50

After the SVC partnership has been configured, you can configure any rcrelationship or rcconsistgrp that you need. Make sure that a single volume is only in one relationship.

Triangle configuration
Figure 9-9 shows the triangle configuration.

Figure 9-9 Triangle configuration

Example 9-162 shows the sequence of mkpartnership commands to execute to create a triangle configuration.
Example 9-162 Creating a triangle configuration

From ITSO_SVC1 to ITSO_SVC2 and ITSO_SVC3
IBM_2145:ITSO_SVC1:admin>mkpartnership -bandwidth 50 ITSO_SVC2
IBM_2145:ITSO_SVC1:admin>mkpartnership -bandwidth 50 ITSO_SVC3
IBM_2145:ITSO_SVC1:admin>lspartnership
id               name      location partnership                bandwidth
000002006BE04FC4 ITSO_SVC1 local
000002006AC03A42 ITSO_SVC2 remote   partially_configured_local 50
0000020060A06FB8 ITSO_SVC3 remote   partially_configured_local 50

From ITSO_SVC2 to ITSO_SVC1 and ITSO_SVC3
IBM_2145:ITSO_SVC2:admin>mkpartnership -bandwidth 50 ITSO_SVC1
IBM_2145:ITSO_SVC2:admin>mkpartnership -bandwidth 50 ITSO_SVC3
IBM_2145:ITSO_SVC2:admin>lspartnership
id               name      location partnership                bandwidth
000002006AC03A42 ITSO_SVC2 local
000002006BE04FC4 ITSO_SVC1 remote   fully_configured           50
0000020060A06FB8 ITSO_SVC3 remote   partially_configured_local 50

From ITSO_SVC3 to ITSO_SVC1 and ITSO_SVC2
IBM_2145:ITSO_SVC3:admin>mkpartnership -bandwidth 50 ITSO_SVC1
IBM_2145:ITSO_SVC3:admin>mkpartnership -bandwidth 50 ITSO_SVC2
IBM_2145:ITSO_SVC3:admin>lspartnership
id               name      location partnership      bandwidth
0000020060A06FB8 ITSO_SVC3 local
000002006BE04FC4 ITSO_SVC1 remote   fully_configured 50
000002006AC03A42 ITSO_SVC2 remote   fully_configured 50

After the SVC partnership has been configured, you can configure any rcrelationship or rcconsistgrp that you need. Make sure that a single volume is only in one relationship.

Fully connected configuration


Figure 9-10 shows the fully connected configuration.

Figure 9-10 Fully connected configuration

Example 9-163 shows the sequence of mkpartnership commands to execute to create a fully connected configuration.
Example 9-163 Creating a fully connected configuration

From ITSO_SVC1 to ITSO_SVC2, ITSO_SVC3, and ITSO_SVC4
IBM_2145:ITSO_SVC1:admin>mkpartnership -bandwidth 50 ITSO_SVC2
IBM_2145:ITSO_SVC1:admin>mkpartnership -bandwidth 50 ITSO_SVC3
IBM_2145:ITSO_SVC1:admin>mkpartnership -bandwidth 50 ITSO_SVC4
IBM_2145:ITSO_SVC1:admin>lspartnership
id               name      location partnership                bandwidth
000002006BE04FC4 ITSO_SVC1 local
000002006AC03A42 ITSO_SVC2 remote   partially_configured_local 50
0000020060A06FB8 ITSO_SVC3 remote   partially_configured_local 50
0000020061C06FCA ITSO_SVC4 remote   partially_configured_local 50

From ITSO_SVC2 to ITSO_SVC1, ITSO_SVC3, and ITSO_SVC4
IBM_2145:ITSO_SVC2:admin>mkpartnership -bandwidth 50 ITSO_SVC1
IBM_2145:ITSO_SVC2:admin>mkpartnership -bandwidth 50 ITSO_SVC3
IBM_2145:ITSO_SVC2:admin>mkpartnership -bandwidth 50 ITSO_SVC4
IBM_2145:ITSO_SVC2:admin>lspartnership
id               name      location partnership                bandwidth
000002006AC03A42 ITSO_SVC2 local
000002006BE04FC4 ITSO_SVC1 remote   fully_configured           50
0000020060A06FB8 ITSO_SVC3 remote   partially_configured_local 50
0000020061C06FCA ITSO_SVC4 remote   partially_configured_local 50

From ITSO_SVC3 to ITSO_SVC1, ITSO_SVC2, and ITSO_SVC4
IBM_2145:ITSO_SVC3:admin>mkpartnership -bandwidth 50 ITSO_SVC1
IBM_2145:ITSO_SVC3:admin>mkpartnership -bandwidth 50 ITSO_SVC2
IBM_2145:ITSO_SVC3:admin>mkpartnership -bandwidth 50 ITSO_SVC4
IBM_2145:ITSO_SVC3:admin>lspartnership
id               name      location partnership                bandwidth
0000020060A06FB8 ITSO_SVC3 local
000002006BE04FC4 ITSO_SVC1 remote   fully_configured           50
000002006AC03A42 ITSO_SVC2 remote   fully_configured           50
0000020061C06FCA ITSO_SVC4 remote   partially_configured_local 50

From ITSO_SVC4 to ITSO_SVC1, ITSO_SVC2, and ITSO_SVC3
IBM_2145:ITSO_SVC4:admin>mkpartnership -bandwidth 50 ITSO_SVC1
IBM_2145:ITSO_SVC4:admin>mkpartnership -bandwidth 50 ITSO_SVC2
IBM_2145:ITSO_SVC4:admin>mkpartnership -bandwidth 50 ITSO_SVC3
IBM_2145:ITSO_SVC4:admin>lspartnership
id               name      location partnership      bandwidth
0000020061C06FCA ITSO_SVC4 local
000002006BE04FC4 ITSO_SVC1 remote   fully_configured 50
000002006AC03A42 ITSO_SVC2 remote   fully_configured 50
0000020060A06FB8 ITSO_SVC3 remote   fully_configured 50

After the SVC partnership has been configured, you can configure any rcrelationship or rcconsistgrp that you need. Make sure that a single volume is only in one relationship.

Daisy-chain configuration
Figure 9-11 on page 577 shows the daisy-chain configuration.


Figure 9-11 Daisy-chain configuration

Example 9-164 shows the sequence of mkpartnership commands to execute to create a daisy-chain configuration.
Example 9-164 Creating a daisy-chain configuration

From ITSO_SVC1 to ITSO_SVC2
IBM_2145:ITSO_SVC1:admin>mkpartnership -bandwidth 50 ITSO_SVC2
IBM_2145:ITSO_SVC1:admin>lspartnership
id               name      location partnership                bandwidth
000002006BE04FC4 ITSO_SVC1 local
000002006AC03A42 ITSO_SVC2 remote   partially_configured_local 50

From ITSO_SVC2 to ITSO_SVC1 and ITSO_SVC3
IBM_2145:ITSO_SVC2:admin>mkpartnership -bandwidth 50 ITSO_SVC1
IBM_2145:ITSO_SVC2:admin>mkpartnership -bandwidth 50 ITSO_SVC3
IBM_2145:ITSO_SVC2:admin>lspartnership
id               name      location partnership                bandwidth
000002006AC03A42 ITSO_SVC2 local
000002006BE04FC4 ITSO_SVC1 remote   fully_configured           50
0000020060A06FB8 ITSO_SVC3 remote   partially_configured_local 50

From ITSO_SVC3 to ITSO_SVC2 and ITSO_SVC4
IBM_2145:ITSO_SVC3:admin>mkpartnership -bandwidth 50 ITSO_SVC2
IBM_2145:ITSO_SVC3:admin>mkpartnership -bandwidth 50 ITSO_SVC4
IBM_2145:ITSO_SVC3:admin>lspartnership
id               name      location partnership                bandwidth
0000020060A06FB8 ITSO_SVC3 local
000002006AC03A42 ITSO_SVC2 remote   fully_configured           50
0000020061C06FCA ITSO_SVC4 remote   partially_configured_local 50

From ITSO_SVC4 to ITSO_SVC3
IBM_2145:ITSO_SVC4:admin>mkpartnership -bandwidth 50 ITSO_SVC3
IBM_2145:ITSO_SVC4:admin>lspartnership
id               name      location partnership      bandwidth
0000020061C06FCA ITSO_SVC4 local
0000020060A06FB8 ITSO_SVC3 remote   fully_configured 50


After the SVC partnership has been configured, you can configure any rcrelationship or rcconsistgrp that you need. Make sure that a single volume is only in one relationship.
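A quick way to confirm that a volume is not already part of a relationship is to check the RC_id and RC_name columns of the lsvdisk output, which are blank for a volume that is in no relationship. The following is a minimal sketch rather than part of the captured scenario (the volume name is only an illustration):

# Blank RC_id/RC_name columns in the listing indicate that the volume
# is not yet part of a remote copy relationship.
IBM_2145:ITSO_SVC1:admin>lsvdisk -filtervalue name=MM_DB_Pri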


9.14 Global Mirror operation


In the following scenario, we set up an intercluster Global Mirror relationship between the SVC system ITSO_SVC1 at the primary site and the SVC system ITSO_SVC4 at the secondary site.

Note: This example is for an intercluster Global Mirror operation only. If you want to set up an intracluster operation, we highlight the parts of the following procedure that you do not need to perform.

Table 9-4 shows the details of the volumes.
Table 9-4 Details of volumes for Global Mirror relationship scenario

Content of volume    Volumes at primary site   Volumes at secondary site
Database files       GM_DB_Pri                 GM_DB_Sec
Database log files   GM_DBLog_Pri              GM_DBLog_Sec
Application files    GM_App_Pri                GM_App_Sec

Because data consistency is needed across GM_DB_Pri and GM_DBLog_Pri, we create a Consistency Group to handle Global Mirror relationships for them. Because in this scenario the application files are independent of the database, we create a stand-alone Global Mirror relationship for GM_App_Pri. Figure 9-12 illustrates the Global Mirror relationship setup.

Figure 9-12 Global Mirror scenario


9.14.1 Setting up Global Mirror


In the following section, we assume that the source and target volumes have already been created and that the ISLs and zoning are in place, enabling the SVC systems to communicate. To set up the Global Mirror, perform the following steps:
1. Create an SVC partnership between ITSO_SVC1 and ITSO_SVC4, on both SVC clustered systems:
   Bandwidth 100 MBps
2. Create a Global Mirror Consistency Group:
   Name CG_W2K3_GM
3. Create the Global Mirror relationship for GM_DB_Pri:
   Master GM_DB_Pri
   Auxiliary GM_DB_Sec
   Auxiliary SVC system ITSO_SVC4
   Name GMREL1
   Consistency Group CG_W2K3_GM
4. Create the Global Mirror relationship for GM_DBLog_Pri:
   Master GM_DBLog_Pri
   Auxiliary GM_DBLog_Sec
   Auxiliary SVC system ITSO_SVC4
   Name GMREL2
   Consistency Group CG_W2K3_GM
5. Create the Global Mirror relationship for GM_App_Pri:
   Master GM_App_Pri
   Auxiliary GM_App_Sec
   Auxiliary SVC system ITSO_SVC4
   Name GMREL3

In the following sections, we perform each step by using the CLI.

9.14.2 Creating an SVC partnership between ITSO_SVC1 and ITSO_SVC4


We create an SVC partnership between both clustered systems.

Note: If you are creating an intracluster Global Mirror, do not perform the next step; instead, go to 9.14.3, Changing link tolerance and system delay simulation on page 582.

Preverification
To verify that both clustered systems can communicate with each other, use the lspartnershipcandidate command. Example 9-165 confirms that our clustered systems are communicating, because ITSO_SVC4 is an eligible candidate at ITSO_SVC1 for an SVC system partnership, and vice versa.
Example 9-165 Listing the available SVC systems for partnership

IBM_2145:ITSO_SVC1:admin>lspartnershipcandidate
id               configured name
0000020061C06FCA no         ITSO_SVC4


IBM_2145:ITSO_SVC4:admin>lspartnershipcandidate
id               configured name
000002006BE04FC4 no         ITSO_SVC1

In Example 9-166, we show the output of the lspartnership command before the SVC system partnership for Global Mirror has been set up. We show this output for comparison with the output after the partnership is in place.
Example 9-166 Pre-verification of system configuration

IBM_2145:ITSO_SVC1:admin>lspartnership
id               name      location partnership bandwidth
000002006BE04FC4 ITSO_SVC1 local
IBM_2145:ITSO_SVC4:admin>lspartnership
id               name      location partnership bandwidth
0000020061C06FCA ITSO_SVC4 local

Partnership between systems


In Example 9-167, we create the partnership from ITSO_SVC1 to ITSO_SVC4, specifying a 100 MBps bandwidth to use for the background copy. To verify the status of the newly created partnership, we issue the lspartnership command. Notice that the new partnership is only partially configured. It will remain partially configured until we run the mkpartnership command on the other clustered system.
Example 9-167 Creating the partnership from ITSO_SVC1 to ITSO_SVC4 and verifying the partnership

IBM_2145:ITSO_SVC1:admin>mkpartnership -bandwidth 100 ITSO_SVC4
IBM_2145:ITSO_SVC1:admin>lspartnership
id               name      location partnership                bandwidth
000002006BE04FC4 ITSO_SVC1 local
0000020061C06FCA ITSO_SVC4 remote   partially_configured_local 100

In Example 9-168, we create the partnership from ITSO_SVC4 back to ITSO_SVC1, specifying a 100 MBps bandwidth to be used for the background copy. After creating the partnership, we verify that the partnership is fully configured by reissuing the lspartnership command.
Example 9-168 Creating the partnership from ITSO_SVC4 to ITSO_SVC1 and verifying the partnership

IBM_2145:ITSO_SVC4:admin>mkpartnership -bandwidth 100 ITSO_SVC1
IBM_2145:ITSO_SVC4:admin>lspartnership
id               name      location partnership      bandwidth
0000020061C06FCA ITSO_SVC4 local
000002006BE04FC4 ITSO_SVC1 remote   fully_configured 100
IBM_2145:ITSO_SVC1:admin>lspartnership
id               name      location partnership      bandwidth
000002006BE04FC4 ITSO_SVC1 local
0000020061C06FCA ITSO_SVC4 remote   fully_configured 100


9.14.3 Changing link tolerance and system delay simulation


The gm_link_tolerance parameter defines the sensitivity of the SVC to inter-link overload conditions. The value is the number of seconds of continuous link difficulties that will be tolerated before the SVC stops the remote copy relationships, to prevent affecting host I/O at the primary site. To change the value, use the following command:

chsystem -gmlinktolerance link_tolerance

The link_tolerance value is between 60 and 86,400 seconds, in increments of 10 seconds. The default value for the link tolerance is 300 seconds. A value of 0 disables link tolerance.

Important: We strongly suggest that you use the default value. If the link is overloaded for a period that affects host I/O at the primary site, the relationships will be stopped to protect those hosts.
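For example, to restore the default setting after testing with another value, reset the tolerance to 300 seconds (a minimal sketch using the same command):

# Restore the default link tolerance of 300 seconds
IBM_2145:ITSO_SVC1:admin>chsystem -gmlinktolerance 300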

Intercluster and intracluster delay simulation


This Global Mirror feature permits the simulation of a delayed write to a remote volume. It allows testing that detects colliding writes, and you can use it to test an application before fully deploying the Global Mirror feature. The delay simulation can be enabled separately for intracluster or intercluster Global Mirror. To enable this feature, run one of the following commands:

For intercluster: chsystem -gminterdelaysimulation <inter_cluster_delay_simulation>
For intracluster: chsystem -gmintradelaysimulation <intra_cluster_delay_simulation>

The inter_cluster_delay_simulation and intra_cluster_delay_simulation values express the amount of time, in milliseconds, that secondary I/Os are delayed for intercluster and intracluster relationships, respectively. That is, they specify the number of milliseconds that I/O activity (copying a primary volume to a secondary volume) is delayed. You can set a value from 0 to 100 milliseconds, in 1 millisecond increments. A value of zero (0) disables the feature.

To check the current settings for the delay simulation, use the lssystem command.

In Example 9-169, we modify the delay simulation values and change the Global Mirror link tolerance parameter. We then show the changed values of the Global Mirror link tolerance and delay simulation parameters.
Example 9-169 Delay simulation and link tolerance modification

IBM_2145:ITSO_SVC1:admin>chsystem -gminterdelaysimulation 20
IBM_2145:ITSO_SVC1:admin>chsystem -gmintradelaysimulation 40
IBM_2145:ITSO_SVC1:admin>chsystem -gmlinktolerance 200
IBM_2145:ITSO_SVC1:admin>lssystem
id 000002006BE04FC4
name ITSO_SVC1
location local
partnership
bandwidth
total_mdisk_capacity 866.5GB


space_in_mdisk_grps 766.5GB
space_allocated_to_vdisks 30.00GB
total_free_space 836.5GB
total_vdiskcopy_capacity 30.00GB
total_used_capacity 30.00GB
total_overallocation 3
total_vdisk_capacity 30.00GB
total_allocated_extent_capacity 31.50GB
statistics_status on
statistics_frequency 15
cluster_locale en_US
time_zone 520 US/Pacific
code_level 6.3.0.0 (build 54.0.1109090000)
console_IP 10.18.228.81:443
id_alias 000002006BE04FC4
gm_link_tolerance 200
gm_inter_cluster_delay_simulation 20
gm_intra_cluster_delay_simulation 40
gm_max_host_delay 5
email_reply
email_contact
email_contact_primary
email_contact_alternate
email_contact_location
email_contact2
email_contact2_primary
email_contact2_alternate
email_state stopped
inventory_mail_interval 0
cluster_ntp_IP_address
cluster_isns_IP_address
iscsi_auth_method chap
iscsi_chap_secret passw0rd
auth_service_configured no
auth_service_enabled no
auth_service_url
auth_service_user_name
auth_service_pwd_set no
auth_service_cert_set no
auth_service_type tip
relationship_bandwidth_limit 25
tier generic_ssd
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier generic_hdd
tier_capacity 766.50GB
tier_free_capacity 736.50GB
has_nas_key no
layer appliance
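Because the lssystem listing is long, it can be convenient to filter out just the Global Mirror settings from a management workstation. This is a minimal sketch and not part of the captured example; the ssh access and the workstation shell are assumptions:

# Show only the Global Mirror tuning parameters
# (assumes key-based ssh access to the system as user admin).
ssh admin@ITSO_SVC1 lssystem | grep "^gm_"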


9.14.4 Creating a Global Mirror Consistency Group


In Example 9-170, we create the Global Mirror Consistency Group using the mkrcconsistgrp command. We will use this Consistency Group for the Global Mirror relationships for the database volumes. The Consistency Group is named CG_W2K3_GM.
Example 9-170 Creating the Global Mirror Consistency Group CG_W2K3_GM

IBM_2145:ITSO_SVC1:admin>mkrcconsistgrp -cluster ITSO_SVC4 -name CG_W2K3_GM
RC Consistency Group, id [0], successfully created
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp
id name       master_cluster_id master_cluster_name aux_cluster_id   aux_cluster_name primary state relationship_count copy_type   cycling_mode
0  CG_W2K3_GM 000002006BE04FC4  ITSO_SVC1           0000020061C06FCA ITSO_SVC4                empty 0                  empty_group none

9.14.5 Creating Global Mirror relationships


In Example 9-171, we create the GMREL1 and GMREL2 Global Mirror relationships for the GM_DB_Pri and GM_DBLog_Pri volumes. We also make them members of the CG_W2K3_GM Global Mirror Consistency Group. We use the lsvdisk command to list all of the volumes in the ITSO_SVC1 system and then use the lsrcrelationshipcandidate command to show the possible volume candidates for GM_DB_Pri in ITSO_SVC4. After checking all of these conditions, we use the mkrcrelationship command to create the Global Mirror relationships. To verify the newly created Global Mirror relationships, we list them with the lsrcrelationship command.
Example 9-171 Creating GMREL1 and GMREL2 Global Mirror relationships

IBM_2145:ITSO_SVC1:admin>lsvdisk -filtervalue name=GM*
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state se_copy_count RC_change
0 GM_DB_Pri 0 io_grp0 online 0 Pool_DS3500-1 10.00GB striped 6005076801AF813F1000000000000031 0 1 empty 0 0 no
1 GM_DBLog_Pri 0 io_grp0 online 0 Pool_DS3500-1 10.00GB striped 6005076801AF813F1000000000000032 0 1 empty 0 0 no
2 GM_App_Pri 0 io_grp0 online 0 Pool_DS3500-1 10.00GB striped 6005076801AF813F1000000000000033 0 1 empty 0 0 no
IBM_2145:ITSO_SVC1:admin>lsrcrelationshipcandidate -aux ITSO_SVC4 -master GM_DB_Pri
id vdisk_name
0  GM_DB_Sec
1  GM_DBLog_Sec
2  GM_App_Sec
IBM_2145:ITSO_SVC1:admin>mkrcrelationship -master GM_DB_Pri -aux GM_DB_Sec -cluster ITSO_SVC4 -consistgrp CG_W2K3_GM -name GMREL1 -global
RC Relationship, id [0], successfully created
IBM_2145:ITSO_SVC1:admin>mkrcrelationship -master GM_DBLog_Pri -aux GM_DBLog_Sec -cluster ITSO_SVC4 -consistgrp CG_W2K3_GM -name GMREL2 -global
RC Relationship, id [1], successfully created
IBM_2145:ITSO_SVC1:admin>lsrcrelationship
id name master_cluster_id master_cluster_name master_vdisk_id master_vdisk_name aux_cluster_id aux_cluster_name aux_vdisk_id aux_vdisk_name primary consistency_group_id consistency_group_name state bg_copy_priority progress copy_type cycling_mode
0 GMREL1 000002006BE04FC4 ITSO_SVC1 0 GM_DB_Pri 0000020061C06FCA ITSO_SVC4 0 GM_DB_Sec master 0 CG_W2K3_GM inconsistent_stopped 50 0 global none
1 GMREL2 000002006BE04FC4 ITSO_SVC1 1 GM_DBLog_Pri 0000020061C06FCA ITSO_SVC4 1 GM_DBLog_Sec master 0 CG_W2K3_GM inconsistent_stopped 50 0 global none

9.14.6 Creating the stand-alone Global Mirror relationship for GM_App_Pri


In Example 9-172, we create the stand-alone Global Mirror relationship GMREL3 for GM_App_Pri. After it is created, we check the status of each of our Global Mirror relationships. Notice that the state of GMREL3 is consistent_stopped, because it was created with the -sync option. The -sync option indicates that the secondary (auxiliary) volume is already synchronized with the primary (master) volume; the initial background synchronization is skipped when this option is used. GMREL1 and GMREL2 were not created with the -sync option, so their auxiliary volumes need to be synchronized with their primary volumes; in Example 9-172 they show as inconsistent_copying because their background copy is already running.
Example 9-172 Creating a stand-alone Global Mirror relationship and verifying it

IBM_2145:ITSO_SVC1:admin>mkrcrelationship -master GM_App_Pri -aux GM_App_Sec -cluster ITSO_SVC4 -sync -name GMREL3 -global
RC Relationship, id [2], successfully created
IBM_2145:ITSO_SVC1:admin>lsrcrelationship -delim :
id:name:master_cluster_id:master_cluster_name:master_vdisk_id:master_vdisk_name:aux_cluster_id:aux_cluster_name:aux_vdisk_id:aux_vdisk_name:primary:consistency_group_id:consistency_group_name:state:bg_copy_priority:progress:copy_type:cycling_mode
0:GMREL1:000002006BE04FC4:ITSO_SVC1:0:GM_DB_Pri:0000020061C06FCA:ITSO_SVC4:0:GM_DB_Sec:master:0:CG_W2K3_GM:inconsistent_copying:50:73:global:none
1:GMREL2:000002006BE04FC4:ITSO_SVC1:1:GM_DBLog_Pri:0000020061C06FCA:ITSO_SVC4:1:GM_DBLog_Sec:master:0:CG_W2K3_GM:inconsistent_copying:50:75:global:none
2:GMREL3:000002006BE04FC4:ITSO_SVC1:2:GM_App_Pri:0000020061C06FCA:ITSO_SVC4:2:GM_App_Sec:master:::consistent_stopped:50:100:global:none

9.14.7 Starting Global Mirror


Now that we have created the Global Mirror Consistency Group and relationships, we are ready to use the Global Mirror relationships in our environment.


When implementing Global Mirror, the goal is to reach a consistent and synchronized state that can provide redundancy in case a hardware failure occurs that affects the SAN at the production site. In this section, we show how to start the stand-alone Global Mirror relationships and the Consistency Group.

9.14.8 Starting a stand-alone Global Mirror relationship


In Example 9-173, we start the stand-alone Global Mirror relationship named GMREL3. Because the Global Mirror relationship was in the Consistent stopped state and no updates have been made to the primary volume, the relationship quickly enters the Consistent synchronized state.
Example 9-173 Starting the stand-alone Global Mirror relationship

IBM_2145:ITSO_SVC1:admin>startrcrelationship GMREL3
IBM_2145:ITSO_SVC1:admin>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name

9.14.9 Starting a Global Mirror Consistency Group


In Example 9-174 on page 587, we start the CG_W2K3_GM Global Mirror Consistency Group. Because the Consistency Group was in the Inconsistent stopped state, it enters the Inconsistent copying state until the background copy has completed for all of the relationships that are in the Consistency Group. Upon completion of the background copy, the CG_W2K3_GM Global Mirror Consistency Group enters the Consistent synchronized state (see Example 9-176 on page 588).


Example 9-174 Starting the Global Mirror Consistency Group

IBM_2145:ITSO_SVC1:admin>startrcconsistgrp CG_W2K3_GM
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp 0
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary master
state inconsistent_copying
relationship_count 2
freeze_time
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2

9.14.10 Monitoring background copy progress


To monitor the background copy progress, use the lsrcrelationship command. This command shows all of the defined Global Mirror relationships if it is used without any parameters. In the command output, progress indicates the current background copy progress. Example 9-175 shows our Global Mirror relationships.

Using SNMP traps: Setting up SNMP traps for the SVC enables automatic notification when Global Mirror Consistency Groups or relationships change state.
Example 9-175 Monitoring background copy progress example

IBM_2145:ITSO_SVC1:admin>lsrcrelationship GMREL1
id 0
name GMREL1
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 0
master_vdisk_name GM_DB_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 0
aux_vdisk_name GM_DB_Sec
primary master
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state inconsistent_copying
bg_copy_priority 50
progress 38
freeze_time
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
IBM_2145:ITSO_SVC1:admin>lsrcrelationship GMREL2
id 1
name GMREL2
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 1
master_vdisk_name GM_DBLog_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 1
aux_vdisk_name GM_DBLog_Sec
primary master
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state inconsistent_copying
bg_copy_priority 50
progress 76
freeze_time
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name

When all of the Global Mirror relationships complete the background copy, the Consistency Group enters the Consistent synchronized state, as shown in Example 9-176.
Example 9-176 Listing the Global Mirror Consistency Group

IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2

9.14.11 Stopping and restarting Global Mirror


Now that the Global Mirror Consistency Group and relationships are running, the following sections describe how to stop, restart, and change the direction of the stand-alone Global Mirror relationships and the Consistency Group.

9.14.12 Stopping a stand-alone Global Mirror relationship


In Example 9-177, we stop the stand-alone Global Mirror relationship while enabling access (write I/O) to both the primary and the secondary volume. As a result, the relationship enters the Idling state.
Example 9-177 Stopping the stand-alone Global Mirror relationship

IBM_2145:ITSO_SVC1:admin>stoprcrelationship -access GMREL3
IBM_2145:ITSO_SVC1:admin>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary
consistency_group_id
consistency_group_name
state idling
bg_copy_priority 50
progress
freeze_time
status
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name


9.14.13 Stopping a Global Mirror Consistency Group


In Example 9-178, we stop the Global Mirror Consistency Group without specifying the -access parameter. Therefore, the Consistency Group enters the Consistent stopped state.
Example 9-178 Stopping a Global Mirror Consistency Group without specifying -access

IBM_2145:ITSO_SVC1:admin>stoprcconsistgrp CG_W2K3_GM
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary master
state consistent_stopped
relationship_count 2
freeze_time
status
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2

If, afterwards, we want to enable access (write I/O) to the secondary volumes, we can reissue the stoprcconsistgrp command, specifying the -access parameter. The Consistency Group transits to the Idling state, as shown in Example 9-179.
Example 9-179 Stopping a Global Mirror Consistency Group and enabling access to the secondary

IBM_2145:ITSO_SVC1:admin>stoprcconsistgrp -access CG_W2K3_GM
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary
state idling
relationship_count 2
freeze_time
status
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2

9.14.14 Restarting a Global Mirror relationship in the Idling state


When restarting a Global Mirror relationship in the Idling state, we must specify the copy direction. If any updates have been performed on either the master or the auxiliary volume, consistency will be compromised. Therefore, we must issue the command with the -force parameter to restart the relationship, as shown in Example 9-180. If the -force parameter is not used, the command fails.
Example 9-180 Restarting a Global Mirror relationship after updates in the Idling state

IBM_2145:ITSO_SVC1:admin>startrcrelationship -primary master -force GMREL3
IBM_2145:ITSO_SVC1:admin>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name

9.14.15 Restarting a Global Mirror Consistency Group in the Idling state


When restarting a Global Mirror Consistency Group in the Idling state, we must specify the copy direction. If any updates have been performed on either the master or the auxiliary volume in any of the Global Mirror relationships in the Consistency Group, consistency will be compromised. Therefore, we must issue the command with the -force parameter to start the relationships; if the -force parameter is not used, the command fails. In Example 9-181 on page 592, we restart the Consistency Group and change the copy direction by specifying the auxiliary volumes to become the primaries.

Example 9-181 Restarting a Global Mirror Consistency Group while changing the copy direction

IBM_2145:ITSO_SVC1:admin>startrcconsistgrp -force -primary aux CG_W2K3_GM
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2

9.14.16 Changing direction for Global Mirror


In this section, we show how to change the copy direction of the stand-alone Global Mirror relationship and the Consistency Group.

9.14.17 Switching copy direction for a Global Mirror relationship


When a Global Mirror relationship is in the Consistent synchronized state, we can change the copy direction for the relationship by using the switchrcrelationship command and specifying the primary volume. If the volume that is specified as the primary is already the primary when you issue this command, the command has no effect. In Example 9-182, we change the copy direction for the stand-alone Global Mirror relationship, specifying the auxiliary volume to become the primary.

Important: When the copy direction is switched, it is crucial that no outstanding I/O exists to the volume that transitions from primary to secondary, because all I/O to that volume is inhibited when it becomes the secondary. Therefore, careful planning is required prior to using the switchrcrelationship command.
Example 9-182 Switching the copy direction for a Global Mirror relationship

IBM_2145:ITSO_SVC1:admin>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
IBM_2145:ITSO_SVC1:admin>switchrcrelationship -primary aux GMREL3
IBM_2145:ITSO_SVC1:admin>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name

9.14.18 Switching copy direction for a Global Mirror Consistency Group


When a Global Mirror Consistency Group is in the Consistent synchronized state, we can change the copy direction for the Consistency Group by using the switchrcconsistgrp command and specifying the primary volume. If the volume that is specified as the primary when issuing this command is already the primary, the command has no effect. In Example 9-183, we change the copy direction for the Global Mirror Consistency Group, specifying the auxiliary to become the primary.

Important: When the copy direction is switched, it is crucial that no outstanding I/O exists to the volume that transitions from primary to secondary, because all I/O is inhibited when that volume becomes the secondary. Therefore, careful planning is required prior to using the switchrcconsistgrp command.
Example 9-183 Switching the copy direction for a Global Mirror Consistency Group

IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2
IBM_2145:ITSO_SVC1:admin>switchrcconsistgrp -primary aux CG_W2K3_GM
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2


9.14.19 Changing a GM relationship to cycling mode


Starting with SVC 6.3, Global Mirror can operate with or without cycling. When operating without cycling, write operations are applied to the secondary volume as soon as possible after they are applied to the primary volume. The secondary volume is generally less than 1 second behind the primary volume, which minimizes the amount of data that must be recovered in the event of a failover. However, this approach requires that a high-bandwidth link be provisioned between the two sites.

When Global Mirror operates in cycling mode, changes are tracked and, where needed, copied to intermediate change volumes. Changes are transmitted to the secondary site periodically. The secondary volumes are much further behind the primary volume, and more data must be recovered in the event of a failover. Because the data transfer can be smoothed over a longer time period, however, lower bandwidth is required to provide an effective solution.

A Global Mirror relationship consists of two volumes, primary and secondary. With SVC 6.3, each of these volumes can be associated with a change volume. Change volumes are used to record changes to the remote copy volume. A FlashCopy relationship exists between the remote copy volume and the change volume. This relationship cannot be manipulated as a normal FlashCopy relationship; most commands fail by design, because this is an internal relationship.

Cycling mode transmits a series of FlashCopy images from the primary to the secondary, and it is enabled using the chrcrelationship -cyclingmode multi command. The primary change volume stores changes to be sent to the secondary volume; the secondary change volume is used to maintain a consistent image at the secondary volume. Every x seconds, the primary FlashCopy mapping is started automatically, where x is the configurable cycling period. Data is then copied to the secondary volume from the primary change volume, and the secondary FlashCopy mapping is started if resynchronization is needed, which means that there is always a consistent copy at the secondary volume.

The cycling period is configurable and the default value is 300 seconds. The recovery point objective (RPO) depends on how long the FlashCopy takes to complete: if the FlashCopy completes within the cycling time, the maximum RPO = 2 * cycling time; otherwise, RPO = 2 * copy completion time. The current RPO can be estimated using the new freeze_time rcrelationship property, which is the time of the last consistent image that is present at the secondary. Figure 9-13 on page 596 shows the cycling mode with change volumes.
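As a hedged sketch of tuning the cycle period on our stand-alone relationship GMREL3 (the -cycleperiodseconds parameter is the one we expect for this purpose; verify it against the CLI reference for your code level), the period can be set in seconds. With the default period of 300 seconds and a FlashCopy that completes within the cycle, the worst-case RPO is 2 * 300 = 600 seconds; doubling the period to 600 seconds relaxes the worst-case RPO to 1200 seconds:

IBM_2145:ITSO_SVC1:admin>chrcrelationship -cycleperiodseconds 600 GMREL3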

Change volume requirements


A change volume:
- May be a thin-provisioned volume
- Must be the same size as the associated volume
- Must be in the same I/O Group as the associated volume
- May not be used for user remote copy or FlashCopy mappings
- Must be defined for both the primary and the secondary volume
- Cannot be manipulated like a normal FlashCopy mapping

In this section we show how to change the cycling mode of the stand-alone Global Mirror relationship (GMREL3) and of Consistency Group CG_W2K3_GM (GMREL1 and GMREL2).


Figure 9-13 Global Mirror with change volumes

We assume that the source and target volumes have already been created and that the ISLs and zoning are in place, enabling the SVC systems to communicate. We also assume that the Global Mirror relationships have already been established.

To change Global Mirror to cycling mode with change volumes, perform the following steps:
1. Create thin-provisioned change volumes for the primary and secondary volumes (both sites).
2. Stop stand-alone relationship GMREL3 to change the cycling mode (primary site).
3. Set the cycling mode on stand-alone relationship GMREL3 (primary site).
4. Set the change volume on the master volume of relationship GMREL3 (primary site).
5. Set the change volume on the auxiliary volume of relationship GMREL3 (secondary site).
6. Start stand-alone relationship GMREL3 in cycling mode (primary site).
7. Stop Consistency Group CG_W2K3_GM to change the cycling mode (primary site).
8. Set the cycling mode on the Consistency Group (primary site).
9. Set the change volume on the master volume of relationship GMREL1 of Consistency Group CG_W2K3_GM (primary site).
10. Set the change volume on the auxiliary volume of relationship GMREL1 (secondary site).
11. Set the change volume on the master volume of relationship GMREL2 of Consistency Group CG_W2K3_GM (primary site).
12. Set the change volume on the auxiliary volume of relationship GMREL2 (secondary site).
13. Start Consistency Group CG_W2K3_GM in cycling mode (primary site).

9.14.20 Create thin provisioned change volumes


We start the setup by creating thin-provisioned change volumes at the primary and secondary sites, as shown in Example 9-184.
Example 9-184 Creating thin provisioning volumes for Global Mirror cycling mode

IBM_2145:ITSO_SVC1:admin>mkvdisk -iogrp 0 -mdiskgrp 0 -size 10 -unit gb -rsize 20% -autoexpand -grainsize 32 -name GM_DB_Pri_CHANGE_VOL
Virtual Disk, id [3], successfully created
IBM_2145:ITSO_SVC1:admin>mkvdisk -iogrp 0 -mdiskgrp 0 -size 10 -unit gb -rsize 20% -autoexpand -grainsize 32 -name GM_DBLog_Pri_CHANGE_VOL
Virtual Disk, id [4], successfully created
IBM_2145:ITSO_SVC1:admin>mkvdisk -iogrp 0 -mdiskgrp 0 -size 10 -unit gb -rsize 20% -autoexpand -grainsize 32 -name GM_App_Pri_CHANGE_VOL
Virtual Disk, id [5], successfully created
IBM_2145:ITSO_SVC4:admin>mkvdisk -iogrp 0 -mdiskgrp 0 -size 10 -unit gb -rsize 20% -autoexpand -grainsize 32 -name GM_DB_Sec_CHANGE_VOL
Virtual Disk, id [3], successfully created
IBM_2145:ITSO_SVC4:admin>mkvdisk -iogrp 0 -mdiskgrp 0 -size 10 -unit gb -rsize 20% -autoexpand -grainsize 32 -name GM_DBLog_Sec_CHANGE_VOL
Virtual Disk, id [4], successfully created
IBM_2145:ITSO_SVC4:admin>mkvdisk -iogrp 0 -mdiskgrp 0 -size 10 -unit gb -rsize 20% -autoexpand -grainsize 32 -name GM_App_Sec_CHANGE_VOL
Virtual Disk, id [5], successfully created

9.14.21 Stop standalone remote copy relationship


We now display the remote copy relationships to make sure that they are synchronized, and then we stop the stand-alone relationship GMREL3, as shown in Example 9-185.
Example 9-185 Stop remote copy standalone relationship

IBM_2145:ITSO_SVC1:admin>lsrcrelationship
id name   master_cluster_id master_cluster_name master_vdisk_id master_vdisk_name aux_cluster_id   aux_cluster_name aux_vdisk_id aux_vdisk_name primary consistency_group_id consistency_group_name state                   bg_copy_priority progress copy_type cycling_mode
0  GMREL1 000002006BE04FC4  ITSO_SVC1           0               GM_DB_Pri         0000020061C06FCA ITSO_SVC4        0            GM_DB_Sec      aux     0                    CG_W2K3_GM             consistent_synchronized 50                        global    none
1  GMREL2 000002006BE04FC4  ITSO_SVC1           1               GM_DBLog_Pri      0000020061C06FCA ITSO_SVC4        1            GM_DBLog_Sec   aux     0                    CG_W2K3_GM             consistent_synchronized 50                        global    none
2  GMREL3 000002006BE04FC4  ITSO_SVC1           2               GM_App_Pri        0000020061C06FCA ITSO_SVC4        2            GM_App_Sec     aux                                                 consistent_synchronized 50                        global    none
IBM_2145:ITSO_SVC1:admin>stoprcrelationship GMREL3

9.14.22 Set cycling mode on standalone remote copy relationship


In Example 9-186 we set the cycling mode on the relationship using the chrcrelationship command. Note that the -cyclingmode and -masterchange parameters cannot be entered in the same command.
Example 9-186 Set cycling mode

IBM_2145:ITSO_SVC1:admin>chrcrelationship -cyclingmode multi GMREL3


9.14.23 Set change volume on master volume


In Example 9-187 we set the change volume for the primary volume of relationship GMREL3. The subsequent display shows the name of the master change volume.
Example 9-187 Set change volume

IBM_2145:ITSO_SVC1:admin>chrcrelationship -masterchange GM_App_Pri_CHANGE_VOL GMREL3
IBM_2145:ITSO_SVC1:admin>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 5
master_change_vdisk_name GM_App_Pri_CHANGE_VOL
aux_change_vdisk_id
aux_change_vdisk_name

9.14.24 Set change volume on auxiliary volume


In Example 9-188 we set the change volume on the auxiliary volume at the secondary site. The display shows the name of the auxiliary change volume.
Example 9-188 Set change volume on auxiliary volume

IBM_2145:ITSO_SVC4:admin>chrcrelationship -auxchange GM_App_Sec_CHANGE_VOL 2
IBM_2145:ITSO_SVC4:admin>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 5
master_change_vdisk_name GM_App_Pri_CHANGE_VOL
aux_change_vdisk_id 5
aux_change_vdisk_name GM_App_Sec_CHANGE_VOL

9.14.25 Start standalone relationship in cycling mode


In Example 9-189 we start the stand-alone relationship GMREL3. After a few minutes, you can check the freeze_time parameter to see how it changes.
Example 9-189 Start standalone relationship in cycling mode

IBM_2145:ITSO_SVC1:admin>startrcrelationship GMREL3
IBM_2145:ITSO_SVC1:admin>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_copying
bg_copy_priority 50
progress 100
freeze_time 2011/10/04/20/37/20
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 5
master_change_vdisk_name GM_App_Pri_CHANGE_VOL
aux_change_vdisk_id 5
aux_change_vdisk_name GM_App_Sec_CHANGE_VOL
IBM_2145:ITSO_SVC1:admin>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_copying
bg_copy_priority 50
progress 100
freeze_time 2011/10/04/20/42/25
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 5
master_change_vdisk_name GM_App_Pri_CHANGE_VOL
aux_change_vdisk_id 5
aux_change_vdisk_name GM_App_Sec_CHANGE_VOL

9.14.26 Stop Consistency Group to change the cycling mode


In Example 9-190 we stop the Consistency Group that contains the two relationships; the Consistency Group must be stopped before Global Mirror can be changed to cycling mode. The display shows that the state of the Consistency Group changes to consistent_stopped.
Example 9-190 Stop Consistency Group to change the cycling mode

IBM_2145:ITSO_SVC1:admin>stoprcconsistgrp CG_W2K3_GM
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_stopped
relationship_count 2
freeze_time
status
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2

9.14.27 Set cycling mode on Consistency Group


In Example 9-191 we change the cycling mode of Consistency Group CG_W2K3_GM. Remember that the Consistency Group must be stopped before the cycling mode is changed; otherwise, the command will fail.
Example 9-191 Set Global Mirror cycling mode on Consistency Group

IBM_2145:ITSO_SVC1:admin>chrcconsistgrp -cyclingmode multi CG_W2K3_GM
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_stopped
relationship_count 2
freeze_time
status
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2

9.14.28 Set change volume on master volume relationships of the Consistency Group
In Example 9-192 we change both relationships of the Consistency Group to add the change volumes on the primary volumes. The displays show the names of the master change volumes.
Example 9-192 Set change volume on master volume

IBM_2145:ITSO_SVC1:admin>chrcrelationship -masterchange GM_DB_Pri_CHANGE_VOL GMREL1
IBM_2145:ITSO_SVC1:admin>lsrcrelationship GMREL1
id 0
name GMREL1
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 0
master_vdisk_name GM_DB_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 0
aux_vdisk_name GM_DB_Sec
primary aux
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 3
master_change_vdisk_name GM_DB_Pri_CHANGE_VOL
aux_change_vdisk_id
aux_change_vdisk_name
IBM_2145:ITSO_SVC1:admin>chrcrelationship -masterchange GM_DBLog_Pri_CHANGE_VOL GMREL2
IBM_2145:ITSO_SVC1:admin>lsrcrelationship GMREL2
id 1
name GMREL2
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 1
master_vdisk_name GM_DBLog_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 1
aux_vdisk_name GM_DBLog_Sec
primary aux
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 4
master_change_vdisk_name GM_DBLog_Pri_CHANGE_VOL
aux_change_vdisk_id
aux_change_vdisk_name


9.14.29 Set change volume on auxiliary volumes


In Example 9-193 we change both relationships of the Consistency Group to add the change volumes to the secondary volumes. The displays show the names of the auxiliary change volumes.
Example 9-193 Set change volume on auxiliary volume

IBM_2145:ITSO_SVC4:admin>chrcrelationship -auxchange GM_DB_Sec_CHANGE_VOL GMREL1
IBM_2145:ITSO_SVC4:admin>lsrcrelationship GMREL1
id 0
name GMREL1
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 0
master_vdisk_name GM_DB_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 0
aux_vdisk_name GM_DB_Sec
primary aux
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 3
master_change_vdisk_name GM_DB_Pri_CHANGE_VOL
aux_change_vdisk_id 3
aux_change_vdisk_name GM_DB_Sec_CHANGE_VOL
IBM_2145:ITSO_SVC4:admin>chrcrelationship -auxchange GM_DBLog_Sec_CHANGE_VOL GMREL2
IBM_2145:ITSO_SVC4:admin>lsrcrelationship GMREL2
id 1
name GMREL2
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 1
master_vdisk_name GM_DBLog_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 1
aux_vdisk_name GM_DBLog_Sec
primary aux
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 4
master_change_vdisk_name GM_DBLog_Pri_CHANGE_VOL
aux_change_vdisk_id 4
aux_change_vdisk_name GM_DBLog_Sec_CHANGE_VOL

9.14.30 Start Consistency Group CG_W2K3_GM in cycling mode


In Example 9-194 we start the Consistency Group in cycling mode. Looking at the freeze_time field, you can see that the Consistency Group has been started in cycling mode and is taking consistency images.
Example 9-194 Start Consistency Group with cycling mode

IBM_2145:ITSO_SVC1:admin>startrcconsistgrp CG_W2K3_GM
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_copying
relationship_count 2
freeze_time 2011/10/04/21/02/33
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_copying
relationship_count 2
freeze_time 2011/10/04/21/07/42
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2

9.15 Service and maintenance


This section details the various service and maintenance tasks that you can execute within the SVC environment.

9.15.1 Upgrading software


This section explains how to upgrade the SVC software.

Package numbering and version


The format for software upgrade packages is four positive integers that are separated by periods. For example, a software upgrade package contains something similar to 6.3.0.0, and each software package is given a unique number.

Requirement: You must run at least the SVC 5.1.0.7 software level before upgrading directly to the SVC 6.3.0.0 software level. Check the recommended software levels at this website:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/index.html

SVC software upgrade test utility


The SAN Volume Controller Software Upgrade Test Utility, which resides on the Master Console, checks the software levels in the system against the recommended levels, which are documented on the support website. You are informed whether the software levels are current or whether you need to download and install newer levels. You can download the utility and installation instructions from this link:
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S4000585

After the software file has been uploaded to the system (to the /home/admin/upgrade directory), you can select the software and apply it to the system by using the web script and the applysoftware command. When a new code level is applied, it is automatically installed on all of the nodes within the system.

The underlying command-line tool runs the sw_preinstall script, which checks the validity of the upgrade file and whether it can be applied over the current level. If the upgrade file is unsuitable, the pre-install script deletes the files, which prevents the buildup of invalid files on the system.

Precaution before upgrade


Software installation is normally considered to be a client's task. The SVC supports concurrent software upgrade: you can perform the software upgrade concurrently with user I/O operations and certain management activities. However, only limited CLI commands are operational from the time that the install command starts until the upgrade operation has either terminated successfully or been backed out. Certain commands fail with a message indicating that a software upgrade is in progress.

Before you upgrade the SVC software, ensure that all I/O paths between all hosts and SANs are working; otherwise, the applications might have I/O failures during the software upgrade. You can verify the I/O paths by using the Subsystem Device Driver (SDD) datapath command; Example 9-195 shows the output.
Example 9-195 Query adapter

#datapath query adapter

Active Adapters :2

Adpt#  Name    State   Mode    Select  Errors  Paths  Active
0      fscsi0  NORMAL  ACTIVE  1445    0       4      4
1      fscsi1  NORMAL  ACTIVE  1888    0       4      4

#datapath query device

Total Devices : 2

DEV#: 0 DEVICE NAME: vpath0 TYPE: 2145 POLICY: Optimized
SERIAL: 60050768018201BF2800000000000000
==========================================================================
Path#    Adapter/Hard Disk    State   Mode    Select  Errors
0        fscsi0/hdisk3        OPEN    NORMAL  0       0
1        fscsi1/hdisk7        OPEN    NORMAL  972     0

DEV#: 1 DEVICE NAME: vpath1 TYPE: 2145 POLICY: Optimized
SERIAL: 60050768018201BF2800000000000002
==========================================================================
Path#    Adapter/Hard Disk    State   Mode    Select  Errors
0        fscsi0/hdisk4        OPEN    NORMAL  784     0
1        fscsi1/hdisk8        OPEN    NORMAL  0       0

Write-through mode: During a software upgrade, there are periods when not all of the nodes in the system are operational, and as a result the cache operates in write-through mode. Note that write-through mode has an effect on the throughput, latency, and bandwidth aspects of performance.

Verify that your uninterruptible power supply unit configuration is also set up correctly (even if your system is running without problems). Specifically, make sure that the following conditions are true:
- Your uninterruptible power supply units are all getting their power from an external source, and they are not daisy chained. Make sure that each uninterruptible power supply unit is not supplying power to another node's uninterruptible power supply unit.
- The power cable and the serial cable that come from each node go back to the same uninterruptible power supply unit. If the cables are crossed and go back to separate uninterruptible power supply units, then during the upgrade, while one node is shut down, another node might also mistakenly be shut down.

Important: Do not share the SVC uninterruptible power supply unit with any other devices.

You must also ensure that all I/O paths are working for each host that runs I/O operations to the SAN during the software upgrade. You can check the I/O paths by using the datapath query commands.

606

IBM System Storage SAN Volume Controller V6.3

Draft Document for Review January 17, 2012 6:10 am

7933 09 CLI Operations Massimo.fm

You do not need to check for hosts that have no active I/O operations to the SAN during the software upgrade.

Procedure
To upgrade the SVC system software, perform the following steps:
1. Before starting the upgrade, back up the configuration (see 9.16, "Backing up the SVC system configuration" on page 621) and save the backup config file in a safe place.
2. Before starting to transfer the software code to the clustered system, clear any previously uploaded upgrade files from the /home/admin/upgrade SVC system directory, as shown in Example 9-196.
Example 9-196 cleardumps command

IBM_2145:ITSO_SVC1:admin>cleardumps -prefix /home/admin/upgrade
IBM_2145:ITSO_SVC1:admin>

3. Save the data collection for support diagnosis in case of problems, as shown in Example 9-197.
Example 9-197 svc_snap -c command

IBM_2145:ITSO_SVC1:admin>svc_snap -c
Collecting system information...
Creating Config Backup
Dumping error log...
Creating Snap
data collected in /dumps/snap.110711.111003.111031.tgz

4. List the dump that was generated by the previous command, as shown in Example 9-198.
Example 9-198 lsdumps command

IBM_2145:ITSO_SVC1:admin>lsdumps
id filename
0 svc.config.cron.bak_108283
1 sel.110711.trc
2 rtc.race_mq_log.txt.110711.trc
3 ethernet.110711.trc
4 svc.config.cron.bak_110711
5 svc.config.cron.xml_110711
6 svc.config.cron.log_110711
7 svc.config.cron.sh_110711
8 svc.config.backup.bak_110711
9 svc.config.backup.tmp.xml
10 110711.trc
11 svc.config.backup.xml_110711
12 svc.config.backup.now.xml
13 snap.110711.111003.111031.tgz

5. Save the generated dump in a safe place using the pscp command, as shown in Example 9-199 on page 608.

Note: The pscp command will not work if you have not loaded your PuTTY SSH private key (or your user ID and password) into the PuTTY pageant agent, as shown in Figure 9-14.


Figure 9-14 Pageant example

Example 9-199 pscp -load command

C:\Program Files (x86)\PuTTY>pscp -load ITSO_SVC1 admin@10.18.228.173:/dumps/snap.110711.111003.111031.tgz c:snap.110711.111003.111031.tgz
snap.110711.111003.111031 | 4999 kB | 4999.8 kB/s | ETA: 00:00:00 | 100%

6. Upload the new software package using PuTTY Secure Copy. Enter the command as shown in Example 9-200.
Example 9-200 pscp -load command

C:\Program Files (x86)\PuTTY>pscp -load ITSO_SVC1 c:\IBM2145_INSTALL_6.3.0.0.110926.tgz.gpg admin@10.18.228.81:/home/admin/upgrade
110926.tgz.gpg | 353712 kB | 11053.5 kB/s | ETA: 00:00:00 | 100%

Upload the SAN Volume Controller Software Upgrade Test Utility by using PuTTY Secure Copy. Enter the command as shown in Example 9-201.
Example 9-201 Upload utility

C:\>pscp -load ITSO_SVC1 IBM2145_INSTALL_svcupgradetest_6.1 admin@10.18.229.81:/home/admin/upgrade
IBM2145_INSTALL_svcupgrad | 11 kB | 12.0 kB/s | ETA: 00:00:00 | 100%

7. Verify that the packages were successfully delivered through the PuTTY command-line application by entering the lsdumps command, as shown in Example 9-202.
Example 9-202 lsdumps command

IBM_2145:ITSO_SVC1:admin>lsdumps -prefix /home/admin/upgrade
id filename
0 IBM2145_INSTALL_6.3.0.0.
1 IBM2145_INSTALL_svcupgradetest_6.1

8. Now that the packages are uploaded, install the SAN Volume Controller Software Upgrade Test Utility, as shown in Example 9-203 on page 609.


Example 9-203 applysoftware command

IBM_2145:ITSO_SVC1:admin>applysoftware -file IBM2145_INSTALL_svcupgradetest_6.1
CMMVC6227I The package installed successfully.

9. Test the upgrade for known issues that might prevent the software upgrade from completing successfully, as shown in Example 9-204.
Example 9-204 svcupgradetest command

IBM_2145:ITSO_SVC1:admin>svcupgradetest -v 6.3.0.0
svcupgradetest version 6.1

Please wait while the tool tests for issues that may prevent a software upgrade from completing successfully. The test will take approximately one minute to complete.
The test has not found any problems with the 2145 cluster.
Please proceed with the software upgrade.

Important: If the svcupgradetest command produces any errors, troubleshoot the errors using the maintenance procedures before continuing.

10. Use the applysoftware command to apply the software upgrade, as shown in Example 9-205.
Example 9-205 Apply upgrade command example

IBM_2145:ITSO_SVC1:admin>applysoftware -file IBM2145_INSTALL_6.3.0.0

While the upgrade runs, you can check the status as shown in Example 9-206.
Example 9-206 Check update status

IBM_2145:ITSO_SVC1:admin>lssoftwareupgradestatus
status upgrading

11. The new code is distributed and applied to each node in the SVC system. After installation, each node is automatically restarted one at a time. If a node does not restart automatically during the upgrade, you must repair it manually.

12. Eventually both nodes display Cluster: on line one of the SVC front panel and the name of your system on line two of the panel. Be prepared for a wait (in our case, we waited approximately 40 minutes).

Performance: During this process, both your CLI and GUI vary from sluggish (slow) to unresponsive. The important thing is that I/O to the hosts can continue through this process.

13. To verify that the upgrade was successful, you can run the lssystem and lsnodevpd commands as shown in Example 9-207. (We truncated the lssystem and lsnodevpd information for this example.)
Example 9-207 lssystem and lsnodevpd commands

IBM_2145:ITSO_SVC1:admin>lssystem
id 000002006BE04FC4
name ITSO_SVC1
location local
partnership
bandwidth
...
cluster_locale en_US
time_zone 520 US/Pacific
code_level 6.3.0.0 (build 54.0.1109090000)
console_IP 10.18.228.81:443
id_alias 000002006BE04FC4
gm_link_tolerance 200
gm_inter_cluster_delay_simulation 20
gm_intra_cluster_delay_simulation 40
gm_max_host_delay 5
...
tier_capacity 766.50GB
tier_free_capacity 736.50GB
has_nas_key no
layer appliance
IBM_2145:ITSO_SVC1:admin>lsnodevpd 1
id 1
system board: 23 fields
part_number 31P1090
...
software: 4 fields
id 1
node_name SVC1N1
WWNN 0x50050768010027e2
code_level 6.3.0.0 (build 54.0.1109090000)

Or you can check whether the code installation has completed without error by copying the log to your management workstation, as explained in 9.15.2, "Running maintenance procedures" on page 610, opening the event log in WordPad, and searching for the "Software Install completed." message.

At this point, you have completed the required tasks to upgrade the SVC software.

9.15.2 Running maintenance procedures


Use the finderr command to generate a list of any unfixed errors in the system. This command analyzes the last generated log that resides in the /dumps/elogs/ directory on the system. To generate a new log before analyzing unfixed errors, run the dumperrlog command (Example 9-208).
Example 9-208 dumperrlog command

IBM_2145:ITSO_SVC1:admin>dumperrlog


This command generates an errlog_timestamp file, such as errlog_110711_111003_090500, where:
- errlog is part of the default prefix for all event log files.
- 110711 is the panel name of the current configuration node.
- 111003 is the date (YYMMDD).
- 090500 is the time (HHMMSS).

You can add the -prefix parameter to your command to change the default prefix of errlog to something else (Example 9-209).
Example 9-209 dumperrlog -prefix command

IBM_2145:ITSO_SVC1:admin>dumperrlog -prefix ITSO_SVC1_errlog

This command creates a file called ITSO_SVC1_errlog_timestamp. To see the file name, enter the following command (Example 9-210).
Example 9-210 lsdumps command

IBM_2145:ITSO_SVC1:admin>lsdumps -prefix /dumps/elogs
id filename
0 errlog_110711_111003_111056
1 testerrorlog_110711_111003_135358
2 ITSO_SVC1_errlog_110711_111003_141111

Maximum number of event log dump files: A maximum of ten event log dump files per node will be kept on the system. When the eleventh dump is made, the oldest existing dump file for that node is overwritten. Note that the directory might also hold log files retrieved from other nodes; these files are not counted. The SVC deletes the oldest file (when necessary) for this node to maintain the maximum number of files. The SVC does not delete files from other nodes unless you issue the cleardumps command.

After you generate your event log, you can issue the finderr command to scan the event log for any unfixed events, as shown in Example 9-211.
Example 9-211 finderr command

IBM_2145:ITSO_SVC1:admin>finderr
Highest priority unfixed error code is [1550]

As you can see, we have one unfixed event on our system. To learn more about this unfixed event, look at the event log in more detail. Use the PuTTY Secure Copy process to copy the file from the system to your local management workstation, as shown in Example 9-212.
Example 9-212 pscp command: Copy event logs off of the SVC

In W2K3: Start -> Run -> cmd

C:\Program Files (x86)\PuTTY>pscp -load ITSO_SVC1 admin@10.18.228.81:/dumps/elogs/ITSO_SVC1_errlog_110711_111003_141111 c:\ITSO_SVC1_errlog_110711_111003_141111
ITSO_SVC1_errlog_110711_1 | 6 kB | 6.8 kB/s | ETA: 00:00:00 | 100%


C:\Program Files (x86)\PuTTY>

To use the Run option, you must know where your pscp.exe file is located. In this case, it is in the C:\Program Files\PuTTY\ folder. This command copies the file ITSO_SVC1_errlog_110711_111003_141111 from the system to the C:\ directory on our local workstation, keeping the same file name. Open the file in WordPad (Notepad does not format the window as well). You will see information similar to what is shown in Example 9-213. (We truncated this list for the purposes of this example.)
Example 9-213 errlog in WordPad

//-------------------
// Error Log Entries
//-------------------

Error Log Entry 0
  Node Identifier       : SVC1N2
  Object Type           : node
  Object ID             : 2
  Copy ID               :
  Sequence Number       : 101
  Root Sequence Number  : 101
  First Error Timestamp : Mon Oct 3 10:50:13 2011
                        : Epoch + 1317664213
  Last Error Timestamp  : Mon Oct 3 10:50:13 2011
                        : Epoch + 1317664213
  Error Count           : 1
  Error ID              : 980221 : Error log cleared
  Error Code            :
  Status Flag           : SNMP trap raised
  Type Flag             : INFORMATION

  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

By scrolling through the log, or by searching for the term unfixed, you can find more detail about the problem. You might see more entries in the error log that have a status of unfixed. After rectifying the problem, you can mark the event as fixed in the log by issuing the cherrstate command against its sequence number; see Example 9-214.
Example 9-214 cherrstate command

IBM_2145:ITSO_SVC1:admin>cherrstate -sequencenumber 106


If you accidentally mark the wrong event as fixed, you can mark it as unfixed again by entering the same command and appending the -unfix flag to the end, as shown in Example 9-215.
Example 9-215 unfix flag

IBM_2145:ITSO_SVC1:admin>cherrstate -sequencenumber 106 -unfix

9.15.3 Setting up SNMP notification


To set up event notification, use the mksnmpserver command. Example 9-216 shows an example of the mksnmpserver command.
Example 9-216 mksnmpserver command

IBM_2145:ITSO_SVC1:admin>mksnmpserver -error on -warning on -info on -ip 9.43.86.160 -community SVC
SNMP Server id [0] successfully created

This command sends all events and warnings to the SVC community on the SNMP manager with the IP address 9.43.86.160.

9.15.4 Set syslog event notification


You can save a syslog to a defined syslog server; the SVC provides support for syslog in addition to email and SNMP traps. The syslog protocol is a client-server standard for forwarding log messages from a sender to a receiver on an IP network. You can use syslog to integrate log messages from various types of systems into a central repository. You can configure the SVC to send information to up to six syslog servers.

You use the mksyslogserver command to configure the SVC using the CLI, as shown in Example 9-217. Using this command with the -h parameter gives you information about all of the available options. In our example, we configure the SVC to use only the default values for our syslog server.
Example 9-217 Configuring the syslog

IBM_2145:ITSO_SVC1:admin>mksyslogserver -ip 10.64.210.231 -name Syslogserv1
Syslog Server id [0] successfully created

When we have configured our syslog server, we can display the current syslog server configurations in our system, as shown in Example 9-218.
Example 9-218 lssyslogserver command

IBM_2145:ITSO_SVC1:admin>lssyslogserver
id name        IP_address    facility error warning info
0  Syslogserv1 10.64.210.231 0        on    on      on
1  Syslogserv1 10.64.210.231          on    on      on
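As a sketch of a non-default configuration (the -facility, -error, -warning, and -info parameters are the ones we expect from the mksyslogserver help output, and the IP address and name here are illustrative only; confirm the options with the -h parameter on your code level), a second server that receives only errors and warnings could be defined as follows:

IBM_2145:ITSO_SVC1:admin>mksyslogserver -ip 10.64.210.232 -facility 3 -error on -warning on -info off -name Syslogserv2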


9.15.5 Configuring error notification using an email server


The SVC can use an email server to send event notification and inventory emails to email users. It can transmit any combination of error, warning, and informational notification types. The SVC supports up to six email servers to provide redundant access to the external email network. The SVC uses the email servers in sequence until the email is successfully sent from the SVC.

Important: Before the SVC can start sending emails, we must run the startemail command, which enables this service.

The attempt is successful when the SVC gets a positive acknowledgement from an email server that the email has been received by the server. If no port is specified, port 25 is the default port, as shown in Example 9-219.
Example 9-219 The mkemailserver command syntax

IBM_2145:ITSO_SVC1:admin>mkemailserver -ip 192.168.1.1
Email Server id [0] successfully created
IBM_2145:ITSO_SVC1:admin>lsemailserver 0
id 0
name emailserver0
IP_address 192.168.1.1
port 25

We can configure an email user that will receive email notifications from the SVC system. We can define up to 12 users to receive emails from our SVC. Using the lsemailuser command, we can verify who is already registered and what type of information is sent to that user, as shown in Example 9-220.
Example 9-220 lsemailuser command

IBM_2145:ITSO_SVC1:admin>lsemailuser
id name               address              user_type error warning info inventory
0  IBM_Support_Center callhome0@de.ibm.com support   on    off     off  on

We can also create a new user, for example a SAN administrator, as shown in Example 9-221.
Example 9-221 mkemailuser command

IBM_2145:ITSO_SVC1:admin>mkemailuser -address SANadmin@ibm.com -error on -warning on -info on -inventory on
User, id [0], successfully created
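With an email server and at least one user defined, remember that notification must still be enabled with the startemail command mentioned earlier. A minimal sketch (the command takes no mandatory parameters as far as we are aware, and stopemail disables the service again):

IBM_2145:ITSO_SVC1:admin>startemail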

9.15.6 Analyzing the event log


The following types of events are logged in the event log:
- Events: an occurrence of significance to a task or system. Events can include the completion or failure of an operation, a user action, or the change in state of a process.
- Node: event codes now have two classifications:
  - Critical: events that put the node into service state and prevent the node from joining the system (event codes 500 - 699).
    Note: Deleting a node from a system causes nodes to enter service state as well.
  - Non-critical: partial hardware faults, for example, one PSU failed in a 2145-CF8 (event codes 800 - 899).

To display the event log, use the lseventlog command, as shown in the following output:

IBM_2145:ITSO_SVC1:admin>lseventlog -count 2
sequence_number last_timestamp object_type object_id object_name copy_id status  fixed event_id error_code description
102             111003105018   cluster               ITSO_SVC1           message no    981004              FC discovery occurred, no configuration changes were detected
103             111003111036   cluster               ITSO_SVC1           message no    981004              FC discovery occurred, no configuration changes were detected
IBM_2145:ITSO_SVC1:admin>lseventlog 103
sequence_number 103
first_timestamp 111003111036
first_timestamp_epoch 1317665436
last_timestamp 111003111036
last_timestamp_epoch 1317665436
object_type cluster
object_id
object_name ITSO_SVC1
copy_id
reporting_node_id 1
reporting_node_name SVC1N1
root_sequence_number
event_count 1
status message
fixed no
auto_fixed no
notification_type informational
event_id 981004
event_id_text FC discovery occurred, no configuration changes were detected
error_code
error_code_text
sense1 01 01 00 00 7E 0B 00 00 04 02 00
sense2 00 00 00 00 10 00 00 00 08 00 08
sense3 00 00 00 00 00 00 00 00 F2 FF 01
sense4 0E 00 00 00 FC FF FF FF 03 00 00
sense5 00 00 06 00 00 00 00 00 00 00 00
sense6 00 00 00 00 03 00 00 00 00 00 00
sense7 00 00 00 00 00 00 00 00 00 00 00
sense8 00 00 00 00 00 00 00 00 00 00 00
These commands allow you to view the most recent events that were generated (you can specify the -count parameter to define how many events you want to display). Use the method described in 9.15.2, "Running maintenance procedures" on page 610 to upload and analyze the event log in more detail.

To clear the event log, you can issue the clearerrlog command, as shown in Example 9-222.
Example 9-222 clearerrlog command

IBM_2145:ITSO_SVC1:admin>clearerrlog
Do you really want to clear the log?

Using the -force flag will stop any confirmation requests from appearing. When executed, this command will clear all of the entries from the event log. This process proceeds even if there are unfixed errors in the log. It also clears any status events that are in the log.

Note: This command is a destructive command for the event log. Use this command only when you have either rebuilt the system or fixed a major problem that caused many entries in the event log that you do not want to fix manually.
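Where such a cleanup is intended, for example in a scripted rebuild, the confirmation prompt can be suppressed with the -force flag described above:

IBM_2145:ITSO_SVC1:admin>clearerrlog -force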

9.15.7 License settings


To change the licensing feature settings, use the chlicense command. Before you change the licensing, you can display the licenses that you already have by issuing the lslicense command, as shown in Example 9-223.
Example 9-223 lslicense command

IBM_2145:ITSO_SVC1:admin>lslicense
used_flash 0.00
used_remote 0.03
used_virtualization 0.75
license_flash 500
license_remote 500
license_virtualization 500
license_physical_disks 0
license_physical_flash off
license_physical_remote off

The current license settings for the system are displayed in the viewing license settings log window. These settings show whether you are licensed to use the FlashCopy, Metro Mirror, Global Mirror, or Virtualization features. They also show the storage capacity that is licensed for virtualization. Typically, the license settings log contains entries, because feature options must be set as part of the web-based system creation process.

Consider, for example, that you have purchased an additional 5 TB of licensing for the Metro Mirror and Global Mirror feature, on top of your existing 20 TB license. Example 9-224 shows the command that you enter.
Example 9-224 chlicense command

IBM_2145:ITSO_SVC1:admin>chlicense -remote 25

To turn a feature off, specify 0 TB as the capacity for the feature that you want to disable.
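For instance, a sketch of disabling the FlashCopy license entirely by setting its capacity to 0 TB (the -flash parameter sets the FlashCopy capacity in terabytes) looks like this:

IBM_2145:ITSO_SVC1:admin>chlicense -flash 0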


To verify that the changes you have made are reflected in your SVC configuration, you can issue the lslicense command as before (see Example 9-225).
Example 9-225 lslicense command: Verifying changes

IBM_2145:ITSO_SVC1:admin>lslicense
used_flash 0.00
used_remote 0.03
used_virtualization 0.75
license_flash 500
license_remote 25
license_virtualization 500
license_physical_disks 0
license_physical_flash off
license_physical_remote off

9.15.8 Listing dumps


Starting with SVC 6.3, a new command is available to list the dumps that were generated over a period of time. You can use lsdumps with the -prefix parameter to return a list of dumps in the appropriate directory. The command produces a list of the files in the specified directory on the specified node. If no node is specified, the configuration node is used. If no prefix is set, the files in the /dumps directory are listed.
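As a brief sketch combining both options (the node ID 2 here is purely illustrative), the following command lists the event log dumps that are held on a specific node:

IBM_2145:ITSO_SVC1:admin>lsdumps -prefix /dumps/elogs 2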

Error or event dump


The dumps that are contained in the /dumps/elogs directory are dumps of the contents of the event log at the time that the dump was taken. You create an error or event log dump by using the dumperrlog command. This command dumps the contents of the error or event log to the /dumps/elogs directory. If you do not supply a file name prefix, the system uses the default errlog_ file name prefix. The full, default file name is errlog_NNNNNN_YYMMDD_HHMMSS. In this file name, NNNNNN is the node front panel name. If the command is used with the -prefix option, the value that is entered for the -prefix is used instead of errlog. The lsdumps with -prefix command lists all of the dumps in the /dumps/elogs directory (Example 9-226).
Example 9-226 lsdumps -prefix /dumps/elogs

IBM_2145:ITSO_SVC1:admin>lsdumps -prefix /dumps/elogs
id filename
0 errlog_110711_111003_111056
1 testerrorlog_110711_111003_135358
2 ITSO_SVC1_errlog_110711_111003_141111
3 ITSO_SVC1_errlog_110711_111003_141620
4 errlog_110711_111003_154759

Featurization log dump


The dumps that are contained in the /dumps/feature directory are dumps of the featurization log. A featurization log dump is created by using the dumpinternallog command. This command dumps the contents of the featurization log to the file feature.txt in the /dumps/feature directory. Only one of these files exists, so every time that the dumpinternallog command is run, this file is overwritten.


The lsdumps command with -prefix /dumps/feature lists all of the dumps in the /dumps/feature directory (Example 9-227).
Example 9-227 lsdumps with -prefix /dumps/feature command

IBM_2145:ITSO_SVC1:admin>lsdumps -prefix /dumps/feature
id filename
0 feature.txt

I/O trace dump


Dumps that are contained in the /dumps/iotrace directory are dumps of I/O trace data. The type of data that is traced depends on the options that are specified by the settrace command. The collection of the I/O trace data is started by using the starttrace command. The I/O trace data collection is stopped when the stoptrace command is used. When the trace is stopped, the data is written to the file. The file name is prefix_NNNNNN_YYMMDD_HHMMSS, where NNNNNN is the node front panel name, and prefix is the value that is entered by the user for the -filename parameter in the settrace command. The command to list all of the dumps in the /dumps/iotrace directory is the lsdumps command with -prefix /dumps/iotrace (Example 9-228).
Example 9-228 lsdumps with -prefix /dumps/iotrace command

IBM_2145:ITSO_SVC1:admin>lsdumps -prefix /dumps/iotrace
id iotrace_filename
0 tracedump_104643_080624_172208
1 iotrace_104643_080624_172451

I/O statistics dump


The dumps that are contained in the /dumps/iostats directory are the dumps of the I/O statistics for the disks on the cluster. An I/O statistics dump is created by using the startstats command. As part of this command, you can specify a time interval at which you want the statistics to be written to the file (the default is 15 minutes). Every time that the time interval is encountered, the I/O statistics that are collected up to this point are written to a file in the /dumps/iostats directory. The file names that are used for storing I/O statistics dumps are m_stats_NNNNNN_YYMMDD_HHMMSS or v_stats_NNNNNN_YYMMDD_HHMMSS, depending on whether the statistics are for MDisks or volumes. In these file names, NNNNNN is the node front panel name. The command to list all of the dumps that are in the /dumps/iostats directory is the lsdumps with -prefix command (Example 9-229).
Example 9-229 lsdumps with -prefix /dumps/iostats command

IBM_2145:ITSO_SVC1:admin>lsdumps -prefix /dumps/iostats
id filename
0 Nm_stats_110711_111003_125706
1 Nn_stats_110711_111003_125706
2 Nv_stats_110711_111003_125706
3 Nd_stats_110711_111003_125706
4 Nv_stats_110711_111003_131204
5 Nd_stats_110711_111003_131204
6 Nn_stats_110711_111003_131204
........

Software dump
The lsdumps command lists the contents of the /dumps directory, which holds general debug information, software and application dumps, and livedumps. Example 9-230 shows the command.
Example 9-230 lsdumps command without prefix

IBM_2145:ITSO_SVC1:admin>lsdumps
id filename
0 svc.config.cron.bak_108283
1 sel.110711.trc
2 rtc.race_mq_log.txt.110711.trc
3 ethernet.110711.trc
4 svc.config.cron.bak_110711
5 svc.config.cron.xml_110711
6 svc.config.cron.log_110711
7 svc.config.cron.sh_110711
8 svc.config.backup.bak_110711
9 svc.config.backup.tmp.xml
10 110711.trc
11 svc.config.backup.xml_110711
12 svc.config.backup.now.xml
13 snap.110711.111003.111031.tgz

Other node dumps


The lsdumps command can accept a node identifier as input (for example, append the node name to the end of the command). If this identifier is not specified, the list of files on the current configuration node is displayed. If the node identifier is specified, the list of files on that node is displayed. However, files can only be copied from the current configuration node (using PuTTY Secure Copy). Therefore, you must issue the cpdumps command to copy the files from a non-configuration node to the current configuration node. Subsequently, you can copy them to the management workstation using PuTTY Secure Copy.

For example, suppose you discover a dump file and want to copy it to your management workstation for further analysis. In this case, you must first copy the file to your current configuration node. To copy dumps from other nodes to the configuration node, use the cpdumps command. In addition to the directory, you can specify a file filter. For example, if you specify /dumps/elogs/*.txt, all of the files in the /dumps/elogs directory that end in .txt are copied.


Wildcards: The following rules apply to the use of wildcards with the SAN Volume Controller CLI:
- The wildcard character is an asterisk (*).
- The command can contain a maximum of one wildcard.
- When you use a wildcard, you must surround the filter entry with double quotation marks (""), for example:
  >cleardumps -prefix "/dumps/elogs/*.txt"

Example 9-231 shows an example of the cpdumps command.
Example 9-231 cpdumps command

IBM_2145:ITSO_SVC1:admin>cpdumps -prefix /dumps/configs n4

Now that you have copied the configuration dump file from node n4 to your configuration node, you can use PuTTY Secure Copy to copy the file to your management workstation for further analysis.

To clear the dumps, you can run the cleardumps command. Again, you can append the node name if you want to clear dumps off of a node other than the current configuration node (the default for the cleardumps command). The commands in Example 9-232 clear all logs or dumps from the SVC node SVC1N2.
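Combining cpdumps with the wildcard rules above, a sketch that copies only the text event logs from node n4 (the node name comes from our example configuration) might look like this:

IBM_2145:ITSO_SVC1:admin>cpdumps -prefix "/dumps/elogs/*.txt" n4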
Example 9-232 cleardumps command

IBM_2145:ITSO_SVC1:admin>cleardumps -prefix /dumps SVC1N2
IBM_2145:ITSO_SVC1:admin>cleardumps -prefix /dumps/iostats SVC1N2
IBM_2145:ITSO_SVC1:admin>cleardumps -prefix /dumps/iotrace SVC1N2
IBM_2145:ITSO_SVC1:admin>cleardumps -prefix /dumps/feature SVC1N2
IBM_2145:ITSO_SVC1:admin>cleardumps -prefix /dumps/config SVC1N2
IBM_2145:ITSO_SVC1:admin>cleardumps -prefix /dumps/elog SVC1N2
IBM_2145:ITSO_SVC1:admin>cleardumps -prefix /home/admin/upgrade SVC1N2


9.16 Backing up the SVC system configuration


You can back up your system configuration by using the Backing Up a Cluster Configuration window or the CLI svcconfig command. In this section, we describe the overall procedure for backing up your system configuration and the conditions that must be satisfied to perform a successful backup.

The backup command extracts configuration data from the system and saves it to the svc.config.backup.xml file in the /tmp directory. This process also produces an svc.config.backup.sh file; you can study this file to see what other commands were issued to extract information. An svc.config.backup.log log is also produced; you can study this log for the details of what was done and when it was done, including information about the other commands that were issued.

Any pre-existing svc.config.backup.xml file is archived as the svc.config.backup.bak file. The system keeps only one archive. We strongly suggest that you immediately move the .XML file and related KEY files (see the following limitations) off the system for archiving. Then erase the files from the /tmp directory using the svcconfig clear -all command.

We further advise that you change all of the objects that have default names to non-default names. Otherwise, a warning is produced for objects with default names; in addition, an object with a default name is restored with its original name with an _r appended. The prefix _ (underscore) is reserved for backup and restore command usage; do not use this prefix in any object names.

Important: The tool backs up logical configuration data only, not client data. It does not replace a traditional data backup and restore tool, but supplements one with a way to back up and restore the client's configuration. To provide a complete backup and disaster recovery solution, you must back up both user (non-configuration) data and configuration (non-user) data. After the restoration of the SVC configuration, you must fully restore user (non-configuration) data to the system's disks.
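After the backup files have been copied off the system, the cleanup step referenced above is a single command; this sketch assumes that the files have already been archived elsewhere:

IBM_2145:ITSO_SVC1:admin>svcconfig clear -all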

9.16.1 Prerequisites
You must have the following prerequisites in place:
- All nodes must be online.
- No object name can begin with an underscore.
- All objects must have non-default names, that is, names that are not assigned by the SVC. Although we advise that objects have non-default names at the time that the backup is taken, this prerequisite is not mandatory. Objects with default names are renamed when they are restored.

Example 9-233 shows an example of the svcconfig backup command.
Example 9-233 svcconfig backup command

IBM_2145:ITSO_SVC1:admin>svcconfig backup
..................
CMMVC6130W Cluster ITSO_SVC4 with inter-cluster partnership fully_configured will not be restored
.........................................................................................
CMMVC6155I SVCCONFIG processing completed successfully

As you can see in Example 9-233, we received a CMMVC6130W Cluster ITSO_SVC4 with inter-cluster partnership fully_configured will not be restored message. This message indicates that the individual systems in a multi-system environment must be backed up individually; if recovery is required, it is performed only on the system where the recovery commands are executed. Example 9-234 shows the pscp command.
Example 9-234 pscp command

C:\Program Files\PuTTY>pscp -load ITSO_SVC1 admin@10.18.229.81:/tmp/svc.config.backup.xml c:\temp\clibackup.xml
clibackup.xml | 97 kB | 97.2 kB/s | ETA: 00:00:00 | 100%

The following scenario illustrates the value of configuration backup:
1. Use the svcconfig command to create a backup file on the clustered system that contains details about the current system configuration.
2. Store the backup configuration on a form of tertiary storage. You must copy the backup file from the clustered system, or it becomes lost if the system crashes.
3. If a sufficiently severe failure occurs, the system might be lost. Both the configuration data (for example, the system definitions of hosts, I/O Groups, MDGs, and MDisks) and the application data on the virtualized disks are lost. In this scenario, it is assumed that the application data can be restored from normal client backup procedures. However, before you can perform this restoration, you must reinstate the system as it was configured at the time of the failure. Therefore, you restore the same MDGs, I/O Groups, host definitions, and volumes that existed prior to the failure. Then you can copy the application data back onto these volumes and resume operations.
4. Recover the hardware: hosts, SVCs, disk controller systems, disks, and SAN fabric. The hardware and SAN fabric must be physically the same as the hardware and SAN fabric that were used before the failure.
5. Re-initialize the clustered system with the configuration node; the other nodes are recovered when the configuration is restored.
6. Restore your clustered system configuration by using the backup configuration file that was generated prior to the failure.
7. Restore the data on your volumes by using your preferred restoration solution or with help from IBM Service.
8. Resume normal operations.

9.17 Restoring the SVC clustered system configuration


Attention: It is extremely important that you always consult IBM Support before you restore the SVC clustered system configuration from the backup. IBM Support can assist you in analyzing the root cause of why the system configuration was lost.


After the svcconfig restore -execute command is started, consider any prior user data on the volumes destroyed. The user data must be recovered through your usual application data backup and restore process. See IBM TotalStorage Open Software Family SAN Volume Controller: Command-Line Interface User's Guide, GC27-2287, for more information about this topic. For a detailed description of the SVC configuration backup and restore functions, see IBM TotalStorage Open Software Family SAN Volume Controller: Configuration Guide, GC27-2286.
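In outline, and only under IBM Support direction, the restore is a two-phase operation. The following is a sketch that assumes the standard two-phase syntax; verify the exact options against the Command-Line Interface User's Guide for your code level:

IBM_2145:ITSO_SVC1:admin>svcconfig restore -prepare
IBM_2145:ITSO_SVC1:admin>svcconfig restore -execute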

9.17.1 Deleting configuration backup


Here we describe the tasks that you can perform to delete the configuration backup that is stored in the configuration file directory on the system. Never clear this configuration without having a backup of your configuration stored in a separate, secure place. The clear command erases the files in the /tmp directory. It does not clear the running configuration or prevent the system from working; it clears only the configuration backup files that are stored in the /tmp directory; see Example 9-235.
Example 9-235 svcconfig clear command

IBM_2145:ITSO_SVC1:admin>svcconfig clear -all
.
CMMVC6155I SVCCONFIG processing completed successfully


9.18 Working with the SVC Quorum MDisk


In this section we show how to list and change the SVC system Quorum Managed Disk.

9.18.1 Listing the SVC Quorum MDisk


To list SVC system Quorum MDisks and view their number and status, issue the lsquorum command as shown in Example 9-236. For more information about SVC Quorum Disk planning and configuration, see Chapter 3, Planning and configuration on page 67.
Example 9-236 lsquorum command and detail

IBM_2145:ITSO_SVC1:admin>lsquorum
quorum_index status id name   controller_id controller_name active object_type override
0            online 1  mdisk1 2             ITSO-DS3500     no     mdisk       no
1            online 0  mdisk0 2             ITSO-DS3500     yes    mdisk       no
2            online 3  mdisk3 2             ITSO-DS3500     no     mdisk       no
IBM_2145:ITSO_SVC1:admin>lsquorum 1
quorum_index 1
status online
id 0
name mdisk0
controller_id 2
controller_name ITSO-DS3500
active yes
object_type mdisk
override no

9.18.2 Changing the SVC Quorum Disk


To move one of your SVC Quorum MDisks from one MDisk to another, or from one storage subsystem to another, use the chquorum command as shown in Example 9-237.
Example 9-237 chquorum command

IBM_2145:ITSO_SVC1:admin>lsquorum
quorum_index status id name   controller_id controller_name active object_type override
0            online 1  mdisk1 2             ITSO-DS3500     no     mdisk       no
1            online 0  mdisk0 2             ITSO-DS3500     yes    mdisk       no
2            online 3  mdisk3 2             ITSO-DS3500     no     mdisk       no

IBM_2145:ITSO_SVC1:admin>chquorum -mdisk 9 2

IBM_2145:ITSO_SVC1:admin>lsquorum
quorum_index status id name   controller_id controller_name active object_type override
0            online 1  mdisk1 2             ITSO-DS3500     no     mdisk       no
1            online 0  mdisk0 2             ITSO-DS3500     yes    mdisk       no
2            online 9  mdisk9 3             ITSO-DS5000     no     mdisk       no

As you can see in Example 9-237, quorum index 2 has been moved from mdisk3 on the ITSO-DS3500 controller to mdisk9 on the ITSO-DS5000 controller.


9.19 Working with the Service Assistant menu


SVC V6.1 introduced a new method for performing service tasks on the system. In addition to being able to perform service tasks from the front panel, you can now also service a node through an Ethernet connection using either a web browser or the CLI. The web browser runs a new service application called the Service Assistant. Service Assistant offers almost all of the function that was previously available through the front panel, but it is now available from the Ethernet connection with an interface that is easier to use and that you can use remotely from the system.

9.19.1 SVC CLI Service Assistant menu


A set of commands relating to this new method for performing service tasks on the system has been introduced. Two major command sets are available:
- The sainfo command set allows you to query the various components within the SVC environment.
- The satask command set allows you to make changes to the various components within the SVC.

When the command syntax is shown, you will see certain parameters in square brackets, for example [parameter], indicating that the parameter is optional in most, if not all, instances. Any information that is not in square brackets is required. You can view the syntax of a command by entering one of the following commands:
- sainfo -?: Shows a complete list of information commands.
- satask -?: Shows a complete list of task commands.
- sainfo commandname -?: Shows the syntax of information commands.
- satask commandname -?: Shows the syntax of task commands.

Example 9-238 shows the two new sets of commands introduced with the Service Assistant.
Example 9-238 sainfo and satask command

IBM_2145:ITSO_SVC1:admin>sainfo -h
The following actions are available with this command :
 lscmdstatus
 lsfiles
 lsservicenodes
 lsservicerecommendation
 lsservicestatus
IBM_2145:ITSO_SVC1:admin>satask -h
The following actions are available with this command :
 chenclosurevpd
 chnodeled
 chserviceip
 chwwnn
 cpfiles
 installsoftware
 leavecluster
 mkcluster
 rescuenode
 setlocale
 setpacedccu


 settempsshkey
 snap
 startservice
 stopnode
 stopservice
 t3recovery

Attention: The sainfo and satask command sets must be used only under IBM Support direction. Incorrect use of these commands can lead to unexpected results.

9.20 SAN troubleshooting and data collection


When we encounter a SAN issue, the SVC is often extremely helpful in troubleshooting the SAN, because the SVC sits at the center of the environment through which the communication travels. SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521, contains a detailed description of how to troubleshoot and collect data from the SVC:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open
Use the lsfabric command regularly to obtain a complete picture of what is connected and visible from the SVC cluster through the SAN. The lsfabric command generates a report that displays the Fibre Channel connectivity between nodes, controllers, and hosts. Example 9-239 shows the report of an lsfabric command.
Example 9-239 lsfabric command

IBM_2145:ITSO_SVC1:admin>lsfabric
remote_wwpn      remote_nportid id node_name local_wwpn       local_port local_nportid state    name       cluster_name type
5005076801405034 030A00         1  SVC1N1    50050768014027E2 1          030800        active   SVC1N2     ITSO_SVC1    node
5005076801405034 030A00         1  SVC1N1    50050768011027E2 3          030900        active   SVC1N2     ITSO_SVC1    node
5005076801305034 040A00         1  SVC1N1    50050768013027E2 2          040800        active   SVC1N2     ITSO_SVC1    node
5005076801305034 040A00         1  SVC1N1    50050768012027E2 4          040900        active   SVC1N2     ITSO_SVC1    node
50050768012027E2 040900         2  SVC1N2    5005076801305034 2          040A00        active   SVC1N1     ITSO_SVC1    node
50050768012027E2 040900         2  SVC1N2    5005076801205034 4          040B00        active   SVC1N1     ITSO_SVC1    node
500507680120505C 040F00         1  SVC1N1    50050768013027E2 2          040800        active   SVC4N2     ITSO_SVC4    node
500507680120505C 040F00         1  SVC1N1    50050768012027E2 4          040900        active   SVC4N2     ITSO_SVC4    node
500507680120505C 040F00         2  SVC1N2    5005076801305034 2          040A00        active   SVC4N2     ITSO_SVC4    node
500507680120505C 040F00         2  SVC1N2    5005076801205034 4          040B00        active   SVC4N2     ITSO_SVC4    node
50050768013027E2 040800         2  SVC1N2    5005076801305034 2          040A00        active   SVC1N1     ITSO_SVC1    node
....
Rows above and below have been removed for brevity
....
20690080E51B09E8 041900         1  SVC1N1    50050768013027E2 2          040800        inactive ITSO-DS3500             controller
20690080E51B09E8 041900         1  SVC1N1    50050768012027E2 4          040900        inactive ITSO-DS3500             controller
20690080E51B09E8 041900         2  SVC1N2    5005076801305034 2          040A00        inactive ITSO-DS3500             controller
20690080E51B09E8 041900         2  SVC1N2    5005076801205034 4          040B00        inactive ITSO-DS3500             controller
50050768013037DC 041400         1  SVC1N1    50050768013027E2 2          040800        active   ITSOSVC3N1 ITSO_SVC3    node
50050768013037DC 041400         1  SVC1N1    50050768012027E2 4          040900        active   ITSOSVC3N1 ITSO_SVC3    node
50050768013037DC 041400         2  SVC1N2    5005076801305034 2          040A00        active   ITSOSVC3N1 ITSO_SVC3    node
50050768013037DC 041400         2  SVC1N2    5005076801205034 4          040B00        active   ITSOSVC3N1 ITSO_SVC3    node
5005076801101D1C 031500         1  SVC1N1    50050768014027E2 1          030800        active   ITSOSVC3N2 ITSO_SVC3    node
5005076801101D1C 031500         1  SVC1N1    50050768011027E2 3          030900        active   ITSOSVC3N2 ITSO_SVC3    node
5005076801101D1C 031500         2  SVC1N2    5005076801405034 1          030A00        active   ITSOSVC3N2 ITSO_SVC3    node
.....
Rows above and below have been removed for brevity
.....
5005076801201D22 021300         1  SVC1N1    50050768013027E2 2          040800        active   SVC2N2     ITSO_SVC2    node
5005076801201D22 021300         1  SVC1N1    50050768012027E2 4          040900        active   SVC2N2     ITSO_SVC2    node
5005076801201D22 021300         2  SVC1N2    5005076801305034 2          040A00        active   SVC2N2     ITSO_SVC2    node
5005076801201D22 021300         2  SVC1N2    5005076801205034 4          040B00        active   SVC2N2     ITSO_SVC2    node
50050768011037DC 011513         1  SVC1N1    50050768014027E2 1          030800        active   ITSOSVC3N1 ITSO_SVC3    node
50050768011037DC 011513         1  SVC1N1    50050768011027E2 3          030900        active   ITSOSVC3N1 ITSO_SVC3    node
50050768011037DC 011513         2  SVC1N2    5005076801405034 1          030A00        active   ITSOSVC3N1 ITSO_SVC3    node
50050768011037DC 011513         2  SVC1N2    5005076801105034 3          030B00        active   ITSOSVC3N1 ITSO_SVC3    node
5005076801301D22 021200         1  SVC1N1    50050768013027E2 2          040800        active   SVC2N2     ITSO_SVC2    node
5005076801301D22 021200         1  SVC1N1    50050768012027E2 4          040900        active   SVC2N2     ITSO_SVC2    node
....
Rows above and below have been removed for brevity
....

For more detail about the lsfabric command, see IBM System Storage SAN Volume Controller and Storwize V7000 Command-Line Interface User's Guide Version 6.3.0, GC27-2287.


9.21 T3 recovery process


A procedure known as T3 recovery has been tested and used in select cases where a system has been completely destroyed. (One example is simultaneously pulling power cords from all nodes to their uninterruptible power supply units; in this case, all nodes boot up to node error 578 when the power is restored.) This procedure, in certain circumstances, is able to recover most user data. However, it is not to be used by the client or IBM service representative without direct involvement from IBM level 3 technical support. This procedure is not published, but we refer to it here only to indicate that the loss of a system can be recoverable without total data loss. However, it requires a restoration of application data from the backup. T3 recovery is an extremely sensitive procedure that is only to be used as a last resort, and it cannot recover any data that was destaged from cache at the time of the total system failure.


Chapter 10. SAN Volume Controller operations using the GUI


In this chapter we illustrate IBM System Storage SAN Volume Controller (SVC) operational management using the SVC GUI. The information is divided into normal operations and advanced operations. We explain the basic configuration procedures that are required to get your SVC environment running as quickly as possible using its GUI. In Chapter 2, IBM System Storage SAN Volume Controller on page 9, we describe the features in greater depth. Here, we focus on the operational aspects.


10.1 SVC normal operations using the GUI


In this section we discuss several of the operations that we have defined as normal, day-to-day activities. Many users can be logged in to the GUI at any given time. However, no locking mechanism exists, so be aware that if two users change the same object at the same time, the last action entered from the GUI is the one that takes effect.

Important: Data entries made through the GUI are case sensitive.

10.1.1 Introduction to SVC normal operations using the GUI


The SVC Home panel (Figure 10-1) is an important panel and is referred to as the Home panel throughout this chapter. (We expect users to be able to locate this panel without displaying it each time.)

Figure 10-1 The Home panel

From this Home panel, on the left panel, there is a dynamic menu.

Dynamic menu
This new version of the SVC GUI includes a new dynamic menu located in the left column of the window. To navigate using this menu, move the mouse over the various icons and choose a page that you want to display, as shown in Figure 10-2 on page 633.


Figure 10-2 The dynamic menu in the left column

A non-dynamic version of this menu exists for slow connections. To access the non-dynamic menu, select Low graphics mode as shown in Figure 10-3.

Figure 10-3 The SVC GUI Login panel

Figure 10-4 on page 634 shows the non-dynamic version of the menu.


Figure 10-4 Non-dynamic menu in the left column

In this case, in the upper part of the page there is a pull-down menu for navigating between submenus. For example, in Figure 10-4, Volumes, Volumes by Pool, and Volumes by Host are submenus (pull-down menus) of the Volumes menu.

Persistent state notification Status Areas


A control panel is available in the bottom part of the window. This dashboard is divided into three Status Areas and it provides information about your cluster. These persistent state notification widgets are reduced by default, as shown in Figure 10-5.

Figure 10-5 Control panel view

Following is a description of each Status Area.

Health Status Area


The rightmost area of the control panel provides information about connectivity; see Figure 10-6.

Figure 10-6 Health Status Area

If there are issues on your cluster nodes, external storage, or remote partnerships, you will be informed here, as shown in Figure 10-7.


Figure 10-7 Node Status error

You can fix the error by clicking the Status Alert Bar, which directs you to the troubleshooting panel.

Storage Allocation Area


The leftmost area of the control panel provides information about the storage allocation, as shown in Figure 10-8.

Figure 10-8 Storage Allocation Area

The following information is displayed in this area. To view all of it, use the up and down arrows:
- Allocated Capacity
- Free Capacity
- Physical Capacity
- Virtual Capacity
- Over-allocation

Long Running Tasks Area


The middle area provides information about the running tasks, as shown in Figure 10-9 on page 635. Information such as Volume Migration, MDisk Removal, Image Mode Migration, Extent Migration, FlashCopy, Metro Mirror and Global Mirror, Volume Formatting, Space-Efficient Copy Repair, Volume Copy Verification, and Volume Copy Synchronization is displayed in this area.

Figure 10-9 Long Running Tasks Area

Clicking within the square, as shown in Figure 10-9, also displays information about recently completed tasks, as shown in Figure 10-10.


Figure 10-10 Recently Completed Tasks information

10.1.2 Organizing window content


The following sections describe several windows within the SVC GUI where you can perform filtering (to minimize the amount of data that is shown on the window) and sorting and reorganizing (to organize the content on the window). This section provides a brief overview of these functions.

Table filtering
In most pages, in the upper right corner of the window, there is a search field to filter the elements, which is useful if the list of entries is too large to work with. Perform these steps to use search filtering: 1. Enter a value in the search box in the upper right corner of the window, as shown in Figure 10-11 on page 636.

Figure 10-11 Show Filter Row icon


2. Click the search icon or press Enter.

3. This function enables you to filter your table based on the column contents. In this example, a volume list is displayed containing names that include ESX; the matches are highlighted in amber, as shown in Figure 10-12. Note that the search option is not case sensitive.

Figure 10-12 Show Filter Row

4. You can remove this filtered view by clicking Reset, as shown in Figure 10-13 on page 637.

Figure 10-13 Reset the filtered view

Note: This filtering option is available in most pages.

Table information
With SVC 6.3, you can add or remove columns in the tables that are available on most pages. As an example, in the Volumes page we add a column to our table:
1. Right-click the header row of the table; see Figure 10-14. A menu with all available columns appears.


Figure 10-14 Add or remove details in a table

2. Select the column that you want to add (or remove) from this table. In our example, we added the volume ID column as shown in Figure 10-15 on page 638.

Figure 10-15 Table with an added ID column

3. You can repeat this process several times to create custom tables that meet your requirements.


Reorganizing columns in tables


You can move a column by clicking its header and dragging the column to a new position, as shown in Figure 10-16.

Figure 10-16 Reorganizing table columns

Sorting
Regardless of whether you use filter options, you can sort the displayed data by clicking a column header, as shown in Figure 10-17. In this example, we sort the table by volume ID.


Figure 10-17 Selecting a column to sort by

After we click the volume ID column, the table is sorted by volume ID as shown in Figure 10-18 on page 640.

Figure 10-18 Table sorted by volume ID


Note: By repeatedly clicking a column, you can sort this table based on that column in ascending or descending order.

10.1.3 Help
To access online help, click the Help link in the upper right corner of any panel, as shown in Figure 10-19.

Figure 10-19 Help link

This action opens a new window where you can find help on different topics (see Figure 10-20).

Figure 10-20 Help window

10.2 Working with External Disk Controllers


This section describes the various configuration and administration tasks that you can perform on External Disk Controllers within the SVC environment.

10.2.1 Viewing Disk Controller details


Perform the following steps to view information about a back-end disk controller in use by the SVC environment:
1. Select Physical Storage in the dynamic menu and then select External.
2. The External panel shown in Figure 10-21 opens. For more detailed information about a specific controller, click a Storage System in the left column (highlighted in the figure).
Chapter 10. SAN Volume Controller operations using the GUI

641

7933 10 GUI Operations Torben.fm

Draft Document for Review January 17, 2012 6:10 am

Figure 10-21 Disk controller systems

10.2.2 Renaming a disk controller


Perform the following steps to rename a disk controller that is used by the SVC cluster:
1. In the left panel, select the controller that you want to rename. Click its name to rename it, as shown in Figure 10-22.

Figure 10-22 Renaming a Storage System


2. Type the new name that you want to assign to the controller, and press Enter as shown in Figure 10-23.

Figure 10-23 Changing the name for Storage System

Controller name: The name can consist of the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the underscore (_) character. The name can be between one and 63 characters in length. However, the name cannot start with a number, the dash, or the underscore.

3. A task is launched to change the name of this Storage System. When it is completed, you can close this window.
4. The new name of your controller is displayed on the Disk Controller Systems panel.
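If you prefer the CLI, the equivalent operation is a one-line sketch (the controller name ITSO-DS3500 and the controller ID 2 are illustrative values):

IBM_2145:ITSO_SVC1:admin>chcontroller -name ITSO-DS3500 2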

10.2.3 Discovering MDisks from the External panel


You can discover managed disks (MDisks) from the External panel. Perform the following steps to discover new MDisks:
1. Select a controller in the left panel.
2. Click the Detect MDisks button to discover MDisks from this controller, as shown in Figure 10-24.

Figure 10-24 Detect MDisks action

3. The Discover Devices task runs.
4. When the task is completed, click Close to see the new MDisks that are available.

10.3 Working with Storage Pools


In this section we describe the tasks that can be performed with the Storage Pools. From the Welcome panel that is shown in Figure 10-1 on page 632, select Physical Storage then Pools.


10.3.1 Viewing Storage Pool information


We perform each of the following tasks from the Pools panel (Figure 10-25 on page 644). To access this panel, from the SVC Welcome panel, click Pools and then click Volumes by Pools.

Figure 10-25 Viewing Storage Pools panel

You can add information (new columns) to the table, as explained in Table information on page 637. To retrieve more detailed information about a specific Storage Pool, select any Storage Pool in the left column. The top right corner of the panel, shown in Figure 10-26, contains the following information about this pool:
- Status
- Number of MDisks
- Number of volume copies
- Whether Easy Tier is active on this pool
- Volume Allocation
- Used Capacity
- Capacity

Figure 10-26 Detailed information about a pool

Change the view to MDisks by Pools. Select the pool that you want to work with, and click the + (expand) button. This panel displays the MDisks that are present in this Storage Pool, as shown in Figure 10-27.

Figure 10-27 MDisks presents in a Storage Pool

10.3.2 Discovering MDisks


Perform the following steps to discover newly assigned MDisks:
1. From the SVC Welcome panel (Figure 10-1 on page 632), click Pools and then click MDisks by Pools.
2. Click Detect MDisks, as shown in Figure 10-28.

Figure 10-28 Detect MDisks action

3. The Discover Devices window is displayed.
4. Click Close to see the newly discovered MDisks.

10.3.3 Creating Storage Pools


Perform the following steps to create a Storage Pool:
1. From the SVC Welcome panel (Figure 10-1 on page 632), click Pools and then click MDisks by Pools. The MDisks by Pools panel opens. On this page, click New Pool, as shown in Figure 10-29.

Figure 10-29 Selecting the option to create a Storage Pool

2. The Create Storage Pools wizard opens.
3. On the first page, complete the following elements, as shown in Figure 10-30 on page 646:
a. You can specify a name for the Storage Pool, as we have in Figure 10-30 on page 646. If you do not provide a name, the SVC automatically generates the name mdiskgrpx, where x is the ID sequence number that is assigned by the SVC internally.

Storage Pool name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The name can be between one and 63 characters in length and is case sensitive, but it cannot start with a number or the word mdiskgrp, because this prefix is reserved for SVC assignment only.

b. You can also change the icon that is associated with this Storage Pool, as shown in Figure 10-30 on page 646.
c. If you expand the Advanced Settings box, you can specify:
- The extent size (by default, 256 MB)
- The warning threshold that sends a warning to the event log when the capacity is first exceeded (by default, 80%)

d. Click Next.

Figure 10-30 Create Storage Pool window: Step 1 of 2

4. On this page (Figure 10-31), you can detect new MDisks by using Detect MDisks. For more information about this topic, see 10.4.3, Discovering MDisks on page 653.
a. Select the MDisks that you want to add to this Storage Pool.
   Tip: To add multiple MDisks, hold down Ctrl and use your mouse to select the entries that you want to add.
b. Click Finish to complete the creation.


Figure 10-31 Create Storage Pool window: Step 2 of 2

5. In the Storage Pools panel (Figure 10-32 on page 647), the new Storage Pool is displayed.

Figure 10-32 A new Storage Pool was added successfully

At this point, you have completed the tasks that are required to create a Storage Pool.
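For reference, a minimal CLI sketch of the same operation (the pool name and MDisk names are illustrative values; the extent size matches the 256 MB default):

IBM_2145:ITSO_SVC1:admin>mkmdiskgrp -name STGPool_DS3500 -ext 256 -mdisk mdisk12:mdisk13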


10.3.4 Renaming a Storage Pool


To rename a Storage Pool, perform the following steps:
1. In the left panel, select the Storage Pool that you want to rename, and then click Actions -> Rename, as shown in Figure 10-33.

Figure 10-33 Renaming a Storage Pool

2. Type the new name that you want to assign to the Storage Pool and press Enter (Figure 10-34).

Figure 10-34 Changing the name for a Storage Pool

Storage Pool name: The name can consist of the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the underscore (_) character. The name can be between one and 63 characters in length. However, the name cannot start with a number, the dash, or the underscore.

3. A task is launched to change the name of this pool. When it is completed, you can close this window.
4. From the Storage Pools panel, the new Storage Pool name is displayed.
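The CLI equivalent is a one-line sketch (both pool names are illustrative values):

IBM_2145:ITSO_SVC1:admin>chmdiskgrp -name STGPool_DS3500_new STGPool_DS3500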


10.3.5 Deleting a Storage Pool


To delete a Storage Pool, perform the following steps:
1. Select the Storage Pool that you want to delete, and then click Delete Pool in the Actions menu (Figure 10-35).

Figure 10-35 Delete Pool menu

2. In the Delete Pool window, click Delete to confirm that you want to delete the Storage Pool (Figure 10-36 on page 649). If there are MDisks and volumes within the Storage Pool that you are deleting, you must select the Delete all volumes, host mappings, and MDisks that are associated with this pool option.

Figure 10-36 Deleting a pool


Attention: If you delete a Storage Pool by using the Delete all volumes, host mappings, and MDisks that are associated with this pool option, and volumes were associated with that Storage Pool, you will lose the data on those volumes, because they are deleted before the Storage Pool. If you want to save your data, migrate or mirror the volumes to another Storage Pool before you delete the Storage Pool that was previously assigned to the volumes.
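On the CLI, the corresponding command is rmmdiskgrp; as a sketch (the pool name is illustrative), the -force flag corresponds to the GUI option that also deletes the volumes, host mappings, and MDisk associations:

IBM_2145:ITSO_SVC1:admin>rmmdiskgrp -force STGPool_DS3500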

10.3.6 Adding or removing MDisks from a Storage Pool


For information about adding MDisks to a Storage Pool, see 10.4.4, Adding MDisks to a Storage Pool on page 654. For information about removing MDisks from a Storage Pool, see 10.4.5, Removing MDisks from a Storage Pool on page 655.

10.3.7 Showing the volumes that are associated with a Storage Pool
To show the volumes that are associated with a Storage Pool, click Volumes and then click Volumes by Pool. For more information about this feature, see 10.7, Working with volumes on page 679.

10.4 Working with managed disks


This section describes the various configuration and administration tasks that you can perform on the managed disks (MDisks) within the SVC environment.

10.4.1 MDisk information


From the SVC Welcome panel, click Pools and then MDisks by Pools. The MDisks panel opens, as shown in Figure 10-37 on page 650. Click the + (expand) button for one or more pools to see the MDisks that belong to each pool.

Figure 10-37 Viewing Managed Disks panel

To retrieve more detailed information about a specific MDisk, perform the following steps:
1. In the MDisks panel, from the expanded view of a pool (Figure 10-37), right-click an MDisk.
2. As shown in Figure 10-38, click Properties.
3. Alternatively, you can select Actions from the menu at the top of the MDisks by Pools view, and select Properties for the selected MDisk.


Figure 10-38 MDisks menu

4. For the selected MDisk, an overview is displayed showing its various parameters and dependent volumes; see Figure 10-39 on page 651.

Note: To obtain all information about the MDisk, select Show Details as shown in Figure 10-39.

Figure 10-39 MDisk Details page


5. Clicking Dependent Volumes displays information about volumes that reside on this MDisk, as shown in Figure 10-40. The volume panel is discussed in more detail in 10.7, Working with volumes on page 679.

Figure 10-40 Dependent volumes for an MDisk

6. Click Close to return to the previous window.

10.4.2 Renaming an MDisk


Perform the following steps to rename an MDisk that is controlled by the SVC cluster:
1. Select the MDisk that you want to rename in the panel shown in Figure 10-37 on page 650.
2. Click the Actions menu and select Rename (Figure 10-41).
3. You can select multiple MDisks to rename by holding down the Ctrl key while selecting the MDisks that you want to rename.


Figure 10-41 Rename Action

Note: You can also right-click this MDisk, as shown in Figure 10-38 on page 651, and select Rename from the list.

4. In the Rename MDisk window (Figure 10-42), type the new name that you want to assign to the MDisk and click Rename.

Figure 10-42 Renaming an MDisk

MDisk name: The name can consist of the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the underscore (_) character. The name can be between one and 63 characters in length.
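As a CLI sketch of the same rename (both names are illustrative values):

IBM_2145:ITSO_SVC1:admin>chmdisk -name mdisk_DS3500_01 mdisk1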

10.4.3 Discovering MDisks


Perform the following steps to discover newly assigned MDisks:
1. In the menu, select Pools and then MDisks by Pools.
2. Click Detect MDisks, as shown in Figure 10-43 on page 654.


Figure 10-43 Detect MDisks action

The Discover Devices window is displayed.
3. When the task is completed, click Close.
4. Newly assigned MDisks are displayed under Not in a Pool as Unmanaged; see Figure 10-44.

Figure 10-44 mdisk12 & mdisk13: Newly discovered managed disks

Troubleshooting: If your MDisks are still not visible, check that the logical unit numbers (LUNs) from your subsystem are properly assigned to the SVC (for example, using storage partitioning with a DS5000) and that appropriate zoning is in place (for example, the SVC can see the disk subsystem).
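From the CLI, the same rescan is a single command:

IBM_2145:ITSO_SVC1:admin>detectmdisk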

10.4.4 Adding MDisks to a Storage Pool


If you created an empty Storage Pool, or you assign additional MDisks to your SVC environment later, you can add MDisks to existing Storage Pools by performing the following steps:

Note: You can add only unmanaged MDisks to a Storage Pool.

1. Select the unmanaged MDisk that you want to add to a Storage Pool.
2. Click Add to Pool in the Actions menu (Figure 10-45 on page 654).

Figure 10-45 Actions: Add to Pool


Note: You can also access the Add to Pool action by right-clicking an unmanaged MDisk.

3. From the Add MDisk to Pool window, select the pool to which you want to add this MDisk, and then click Add to Pool, as shown in Figure 10-46.

Figure 10-46 Adding an MDisk to an existing Storage Pool
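A CLI sketch of the same operation, which adds one or more unmanaged MDisks to a pool in a single command (names are illustrative values):

IBM_2145:ITSO_SVC1:admin>addmdisk -mdisk mdisk12:mdisk13 STGPool_DS3500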

10.4.5 Removing MDisks from a Storage Pool


To remove an MDisk from a Storage Pool, perform the following steps:
1. Select the MDisk that you want to remove from a Storage Pool.
2. Click Remove from Pool in the Actions menu (Figure 10-47 on page 655).

Figure 10-47 Actions: Remove from Pool

Note: You can also access the Remove from Pool action by right-clicking an MDisk.


3. From the Remove from Pool window (Figure 10-48), you must verify the number of MDisks that you want to remove from this pool. This verification was added to help prevent accidental data deletion. If volumes are using the MDisks that you are removing from the Storage Pool, you must confirm the removal by selecting the option Remove the MDisk from the storage pool even if it has data on it. The system migrates the data to other MDisks in the pool.
4. Click Delete, as shown in Figure 10-48.

Figure 10-48 Removing an MDisk from an existing Storage Pool

An error message is displayed, as shown in Figure 10-49 on page 656, if there is insufficient space to migrate the volume data to other extents on other MDisks in that Storage Pool.

Figure 10-49 Remove MDisk error message
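A CLI sketch of the same removal (names are illustrative values); here, -force requests the same migration of data to the remaining MDisks in the pool:

IBM_2145:ITSO_SVC1:admin>rmmdisk -mdisk mdisk13 -force STGPool_DS3500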

10.4.6 Including an excluded MDisk


If a significant number of errors occur on an MDisk, the SVC automatically excludes it. These errors can result from a hardware problem, a storage area network (SAN) zoning problem, or poorly planned maintenance. If it is a hardware fault, you will receive Simple Network Management Protocol (SNMP) alerts regarding the state of the hardware (before the disk was excluded) and any preventive maintenance that has been undertaken. If not, the hosts that were using volumes on the excluded MDisk now have I/O errors.

After you take the necessary corrective action to repair the MDisk (for example, replace the failed disk and repair the SAN zones), you can tell the SVC to include the MDisk again. Perform the following steps to include an excluded MDisk:
1. From the SVC Welcome panel, click Physical Storage in the left menu, and then click the MDisks panel.
2. Select the MDisk that you want to include again.
3. Click Include Excluded MDisk in the Actions menu.

Note: You can also include an excluded MDisk by right-clicking it and selecting Include Excluded MDisk from the list.
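On the CLI, the same action is a one-line sketch (the MDisk name is illustrative):

IBM_2145:ITSO_SVC1:admin>includemdisk mdisk3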

10.4.7 Activating EasyTier


To activate Easy Tier, you need a true multitier pool that contains both generic HDD and SSD MDisks. After MDisks are detected, they have a default disk tier of generic_hdd (shown as Hard Disk Drive in Figure 10-50 on page 657).

Figure 10-50 Default disk tier

Note: For more detailed information about Easy Tier, see Chapter 7, Easy Tier on page 349.

Easy Tier is still inactive (Figure 10-50) for the storage pool, because we do not yet have a true multitier pool. To activate it, we have to set the SSD MDisks to their correct generic_ssd tier. To set an MDisk as SSD in a Storage Pool, perform the following steps (repeat this action for each of your SSD MDisks):
1. Select the MDisk.
2. Click Select Tier in the Actions menu, as shown in Figure 10-51.

Note: You can also access the Select Tier action by right-clicking an MDisk.


Figure 10-51 Select Tier menu

3. In the Select MDisk Tier window, shown in Figure 10-52 on page 658, select Solid-State Drive using the drop-down list and then click OK.

Figure 10-52 Select MDisk Tier window

4. Easy Tier is now activated for this multitier pool (Hard Disk Drive and Solid-State Drive), as shown in Figure 10-53.

Figure 10-53 EasyTier activated on a storage pool
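A CLI sketch of the same tier change (the MDisk name is an illustrative value):

IBM_2145:ITSO_SVC1:admin>chmdisk -tier generic_ssd mdisk13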


10.5 Migration
See Chapter 6, Data migration on page 227 for a comprehensive description of data migration.

10.6 Working with hosts


In this section we describe the various configuration and administration tasks that you can perform on the hosts that are connected to your SVC.

Note: For more details about connecting hosts to an SVC in a SAN environment, see Chapter 5, Host configuration on page 149.

A host system is a computer that is connected to the SAN Volume Controller through either a Fibre Channel interface or an IP network. A host object is a logical object in the SAN Volume Controller that represents a list of worldwide port names (WWPNs) and a list of iSCSI names that identify the interfaces that the host system uses to communicate with the SAN Volume Controller. iSCSI names can be either iSCSI qualified names (IQNs) or extended unique identifiers (EUIs).

A typical configuration has one host object for each host system that is attached to the SAN Volume Controller. If a cluster of hosts accesses the same storage, you can add HBA ports from several hosts to one host object to make a simpler configuration. A host object can have both WWPNs and iSCSI names.

There are four ways to visualize and manage your hosts:

By using the All Hosts panel, as shown in Figure 10-54

Figure 10-54 All Hosts panel

By using the Ports by Host panel, as shown in Figure 10-55


Figure 10-55 Ports by Host panel

By using the Host Mapping panel, as shown in Figure 10-56 on page 660

Figure 10-56 Host Mapping panel

By using the Volumes by Hosts panel, as shown in Figure 10-57.

Figure 10-57 Host by Volumes


Important: Several actions on the hosts are specific to the Ports by Host or the Host Mapping panels, but all these actions and others are accessible from the All Hosts panel. For this reason, all actions on hosts will be executed from the All Hosts panel.

10.6.1 Host information


To access the All Hosts panel, from the SVC Overview panel in Figure 10-1 on page 632, click Hosts and then All Hosts (Figure 10-54 on page 659). You can add information (new columns) to the table in the All Hosts panel, as shown in Figure 10-54 on page 659; see Table information on page 637. To retrieve more information about a specific host, perform the following steps:
1. Select a host in the table.
2. Click Properties in the Actions menu (Figure 10-58).

Figure 10-58 Actions: Host Properties

Note: You can also access the Properties action by right-clicking a host.

3. For a given host, the Overview window presents information as shown in Figure 10-59.

Figure 10-59 Host Details: Overview


Note: To obtain more information about the host, select Show Details (Figure 10-59).

4. On the Mapped Volumes tab (Figure 10-60), you see the volumes that are mapped to this host.

Figure 10-60 Host Details: Mapped volumes

5. The Port Definitions tab (Figure 10-61) displays attachment information, such as the worldwide port names (WWPNs) or the iSCSI qualified names (IQNs) that are defined for this host.


Figure 10-61 Host Details: Port Definitions

When you are finished viewing the details, click Close to return to the previous window.

10.6.2 Creating a host


There are two types of host connections: Fibre Channel (FC) and iSCSI. In this section we detail both methods. For Fibre Channel hosts, see the steps in Fibre Channel attached hosts. For iSCSI hosts, see the steps in iSCSI-attached hosts on page 666.

Fibre Channel attached hosts


To create a new host that uses the FC connection type, perform the following steps:
1. Go to the All Hosts panel from the SVC Welcome panel in Figure 10-1 on page 632, and then click Hosts and then All Hosts (Figure 10-54 on page 659).
2. Click New Host, as shown in Figure 10-62.

Figure 10-62 New Host action

3. Select Fibre-Channel Host from the two types of connection available (Figure 10-63).


Figure 10-63 Create Host window

4. In the Creating Hosts window (Figure 10-64 on page 665), type a name for your host (Host Name).

Host name: If you do not provide a name, the SVC automatically generates the name hostx (where x is the ID sequence number that is assigned by the SVC internally). If you want to provide a name, you can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The host name can be between one and 63 characters in length.

5. Fibre-Channel Ports section: Use the drop-down list to select the WWPNs that correspond to your HBA or HBAs, and click Add Port to List. To add additional ports, repeat this action.

Note: If you added a wrong Fibre-Channel port, you can delete it from the list by clicking the red cross.

If your WWPNs are not displayed, click Rescan to rediscover any new WWPNs available since the last scan.

Note: In certain cases your WWPNs still might not be displayed, even though you are sure that your adapter is functioning (for example, you see the WWPN in the switch name server) and your zones are correctly set up. To rectify this, type the WWPN of your HBA or HBAs into the drop-down list and click Add Port to List. It will be displayed as unverified.

6. Advanced Settings section: If you need to modify the I/O Group, the Port Mask, or the Host Type, select Advanced to access these Advanced Settings, as shown in Figure 10-64 on page 665.

Select one or more I/O Groups from which the host can access volumes. By default, all I/O Groups are selected.

You can use a port mask to control the node target ports that a host can access. The port mask applies to logins from the host initiator port that is associated with the host object.


Note: For each login between a host bus adapter (HBA) port and a node port, the node examines the port mask that is associated with the host object of which the HBA is a member and determines whether access is allowed or denied. If access is denied, the node responds to SCSI commands as though the HBA port is unknown.

Select the Host Type. The default type is Generic. Use Generic for all hosts, unless you use Hewlett-Packard UNIX (HP-UX) or Sun. For these, select HP_UX (to have more than eight LUNs supported for HP-UX machines) or TPGS for Sun hosts that use MPxIO.

Figure 10-64 Creating a new Fibre Channel connected host

7. Click the Create Host button as shown in Figure 10-64. This action brings you back to the All Hosts panel (Figure 10-65 on page 665) where you can see the newly added FC host.

Figure 10-65 Create host results
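A minimal CLI sketch of the same operation (the host name and WWPN are illustrative values):

IBM_2145:ITSO_SVC1:admin>mkhost -name Almaden -hbawwpn 210000E08B054CAA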


iSCSI-attached hosts
To create a new host that uses the iSCSI connection type, perform the following steps:
1. Go to the All Hosts panel from the SVC Welcome panel in Figure 10-1 on page 632, and then click Hosts and then All Hosts (Figure 10-54 on page 659).
2. Click New Host, as shown in Figure 10-66.

Figure 10-66 New Host action

3. Select iSCSI Host from the two types of connection (Figure 10-67).

Figure 10-67 Create Host window

4. In the Creating Hosts window (Figure 10-68 on page 668), type a name for your host (Host Name).

Host name: If you do not provide a name, the SVC automatically generates the name hostx (where x is the ID sequence number that is assigned by the SVC internally). If you want to provide a name, you can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The host name can be between one and 63 characters in length.

5. iSCSI Ports section: Enter the iSCSI initiator name or IQN as an iSCSI port, and then click Add Port to List. This IQN is obtained from the server and generally has the same purpose as the WWPN. To add additional ports, repeat this action.

Note: If you add the wrong iSCSI port, you can delete it from the list by clicking the red cross.


If needed, select Use CHAP authentication (all ports) and enter the CHAP secret, as shown in Figure 10-68 on page 668. The CHAP secret is the authentication method that is used to restrict other iSCSI hosts from using the same connection. You can set the CHAP for the whole cluster under the cluster properties, or for each host definition. The CHAP must be identical on the server and on the cluster or host definition. You can create an iSCSI host definition without using a CHAP.

6. Advanced Settings section: If you need to modify the I/O Group, the Port Mask, or the Host Type, select Advanced to access these settings, as shown in Figure 10-64 on page 665.

Select one or more I/O Groups from which the host can access volumes. By default, all I/O Groups are selected.

You can use a port mask to control the node target ports that a host can access. The port mask applies to logins from the host initiator port that is associated with the host object.

Note: For each login between a host bus adapter (HBA) port and a node port, the node examines the port mask that is associated with the host object of which the HBA is a member and determines whether access is allowed or denied. If access is denied, the node responds to SCSI commands as though the HBA port is unknown.

Select the Host Type. The default type is Generic. Use Generic for all hosts, unless you use Hewlett-Packard UNIX (HP-UX) or Sun. For these, select HP_UX (to have more than eight LUNs supported for HP-UX machines) or TPGS for Sun hosts that use MPxIO.


Figure 10-68 Creating a new iSCSI host

7. Click Create Host as shown in Figure 10-68. This action brings you back to the All Hosts panel (Figure 10-69) where you can see the newly added iSCSI host.

Figure 10-69 Create host results
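The CLI equivalent is a sketch of the form (the host name and IQN are illustrative values):

IBM_2145:ITSO_SVC1:admin>mkhost -name Nile -iscsiname iqn.1991-05.com.microsoft:nile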

10.6.3 Renaming a host


Perform the following steps to rename a host:
1. Select the host that you want to rename in the table.
2. Click Rename in the Actions menu (Figure 10-70 on page 669).


Figure 10-70 Rename Action

Note: There are two other ways to rename a host. You can right-click a host and select Rename from the list, or use the method described in 10.6.4, Modifying a host on page 669.

3. In the Rename Host window, type the new name that you want to assign and click Rename (Figure 10-71).

Figure 10-71 Renaming a host

Host name: If you do not provide a name, the SVC automatically generates the name hostx (where x is the ID sequence number that is assigned by the SVC internally). If you want to provide a name, you can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The host name can be between one and 63 characters in length.
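A CLI sketch of the same rename (both names are illustrative values):

IBM_2145:ITSO_SVC1:admin>chhost -name Almaden_new Almaden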

10.6.4 Modifying a host


To modify a host, perform the following steps:
1. Select the host that you want to modify in the table.
2. Click Properties in the Actions menu (Figure 10-72 on page 669).

Figure 10-72 Host Properties


Note: You can also right-click a host and select Properties from the list.

3. In the Overview tab, click Edit to be able to modify the parameters for this host. You can modify:
- The Host Name.
  Host name: If you do not provide a name, the SVC automatically generates the name hostx (where x is the ID sequence number that is assigned by the SVC internally). If you want to provide a name, you can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The host name can be between one and 63 characters in length.
- The Host Type: The default type is Generic. Use Generic for all hosts, unless you use Hewlett-Packard UNIX (HP-UX) or Sun. For these, select HP_UX (to have more than eight LUNs supported for HP-UX machines) or TPGS for Sun hosts that use MPxIO.
- Advanced Settings: If you need to modify the I/O Group, the Port Mask, or the iSCSI CHAP Secret (in case you want to convert it to an iSCSI host), select Advanced to access these settings, as shown in Figure 10-73 on page 670.

Figure 10-73 Modifying a host

4. Save the changes by clicking Save.
5. You can close the Host Details window by clicking Close.

10.6.5 Deleting a host


To delete a host, perform the following steps:
1. Select the host or hosts that you want to delete in the table.

2. Click Delete in the Actions menu (Figure 10-74).

Figure 10-74 Delete Action

Note: You can also right-click a host and select Delete from the list.

3. The Delete Host window opens, as shown in Figure 10-75 on page 671. In the field Verify the number of hosts that you are deleting, enter a value that matches the number of hosts that you want to remove. This verification was added to help prevent inadvertently deleting the wrong hosts. If volumes are still associated with the host, and you are sure that you want to delete the host even though these volumes will no longer be accessible, select the Delete the host even if volumes are mapped to them. These volumes will no longer be accessible to the hosts. option.
4. Click Delete to complete the operation (Figure 10-75).

Figure 10-75 Deleting a host
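A CLI sketch of the same deletion (the host name is illustrative); the -force flag corresponds to deleting the host even though volume mappings still exist:

IBM_2145:ITSO_SVC1:admin>rmhost -force Almaden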

10.6.6 Adding ports


If you add an HBA or a network interface controller (NIC) to a server that is already defined within the SVC, you can simply add additional ports to your host definition by performing the steps described in this section.

Note: A host can have both FC and iSCSI ports defined, but it is better to avoid using them at the same time.


To add a port to a host, perform the following steps:
1. Select the host in the table.
2. Click Properties in the Actions menu (Figure 10-76).

Figure 10-76 Host Properties

Note: You can also right-click a host and select Properties from the list.

3. On the Properties window, click Port Definitions (Figure 10-77).

Figure 10-77 Port Definitions tab

4. Click Add and select the type of port that you want to add to your host (Fibre Channel Port or iSCSI Port) as shown in Figure 10-78. In this example, we selected a Fibre-Channel Port.


Figure 10-78 Adding a Fibre Channel or an iSCSI Port action

5. In the Add Fibre-Channel Ports window (Figure 10-79 on page 673), use the drop-down list to select the WWPNs that correspond to your HBA or HBAs and click Add Port to List. To add additional ports, repeat this action.

Note: If you added the wrong Fibre-Channel port, you can delete it from the list by clicking the red cross. If your WWPNs are not displayed, click Rescan to rediscover any new WWPNs that have become available since the last scan.

Note: In certain cases, your WWPNs might still not be displayed, even though you are sure that your adapter is functioning (for example, you see the WWPN in the switch name server) and your zones are correctly set up. To rectify this situation, type the WWPN of your HBA or HBAs into the drop-down list and click Add Port to List. It will be displayed as unverified.

6. To finish, click Add Ports to Host.

Figure 10-79 Adding Fibre-Channel Ports


7. This action takes you back to the Port Definitions window (Figure 10-80), where you can see the newly added ports.

Figure 10-80 Port Definitions tab updated

Note: The procedure is exactly the same for iSCSI ports, except that you add iSCSI port names (IQNs) instead of WWPNs.
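Ports can also be added from the CLI with the addhostport command; a minimal sketch in which the host name, WWPN, and IQN are placeholder values:

   svctask addhostport -hbawwpn 210100E08B251DD4 ITSO_HOST1
   svctask addhostport -iscsiname iqn.1991-05.com.microsoft:itso-w2k8 ITSO_HOST1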

10.6.7 Deleting ports


To delete a port from a host, perform the following steps:
1. Select the host in the table.
2. Click Properties in the Actions menu (Figure 10-81).

Figure 10-81 Host Properties

Tip: You can also right-click a host and select Properties from the list.

3. On the opened window, click Port Definitions (Figure 10-82).


Figure 10-82 Port Definitions tab

4. Select the port or ports that you want to remove.
5. Click Delete Port (Figure 10-83).

Figure 10-83 Port Definitions tab: Delete port

6. In the Delete Port window (Figure 10-84), in the Verify the number of ports to delete field, enter a value matching the number of ports that you want to remove. This verification helps to prevent you from inadvertently deleting the wrong ports.

Figure 10-84 Delete Port window

7. Click Delete to remove the port or ports.
8. This action brings you back to the Port Definitions window.
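The CLI equivalent is the rmhostport command; a minimal sketch with placeholder values:

   svctask rmhostport -hbawwpn 210100E08B251DD4 ITSO_HOST1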

10.6.8 Creating or modifying the host mapping


To modify the host mapping, perform the following steps:
1. Select the host in the table.
2. Click Modify Mappings in the Actions menu (Figure 10-85 on page 676).

Tip: You can also right-click a host and select Modify Mappings from the list.

Figure 10-85 Modify Mappings Action

3. On the Modify Mappings window select the volume or volumes that you want to map to this host and move each of them to the right table using the right arrow, as shown in Figure 10-86. If you need to remove them, use the left arrow.


Figure 10-86 Modify Mappings window: Adding volumes to a host

In the right table, you can edit the SCSI ID by selecting a mapping that is highlighted in yellow, indicating that the mapping is new. Click Edit SCSI ID (Figure 10-86).

Note: Only new mappings can have their SCSI ID changed. To edit the SCSI ID of an existing mapping, you must unmap the volume and recreate the mapping.

In the Edit SCSI ID window, change the SCSI ID and then click OK (Figure 10-87 on page 677).

Figure 10-87 Modify Mappings window: Edit SCSI ID

4. After all the volumes you wanted to map to this host have been added, click OK to create the Host mapping relationships.
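A mapping can also be created from the CLI with the mkvdiskhostmap command; a minimal sketch in which the host name, SCSI ID, and volume name are placeholders:

   svctask mkvdiskhostmap -host ITSO_HOST1 -scsi 0 VOL_001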


10.6.9 Deleting a host mapping


To delete a host mapping, perform the following steps:
1. Select the host in the table.
2. Click Modify Mappings in the Actions menu (Figure 10-88).

Figure 10-88 Modify Mappings

Tip: You can also right-click a host and select Modify Mappings from the list.

3. Select the host mapping or mappings that you want to remove.
4. When you have selected the volumes that you want to remove, click the arrow in the middle to move them out of the mapped list, and then click the Apply or Map Volumes button to complete the Modify Mappings action (Figure 10-89).

Figure 10-89 Modify Host mappings: Unmap a volume
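The CLI equivalent is the rmvdiskhostmap command; a minimal sketch with placeholder names:

   svctask rmvdiskhostmap -host ITSO_HOST1 VOL_001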

10.6.10 Deleting all host mappings for a given host


To delete all host mappings for a given host, perform the following steps:
1. Select the host in the table.
2. Click Unmap All volumes in the Actions menu (Figure 10-90).


Figure 10-90 Unmap All volumes from Actions menu

Tip: You can also right-click a host and select Unmap All volumes from the list.

From the Unmap from Host window (Figure 10-91 on page 679), in the Verify the number of mappings that this operation affects: field, enter a value matching the number of mappings that you want to remove. This verification helps to prevent you from inadvertently removing the wrong mappings.

Figure 10-91 Unmap from Host window

3. Click Unmap to remove the host mapping or mappings. This action brings you back to the All Hosts window.
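From the CLI, you can list all of the mappings for a host with lshostvdiskmap and then remove each one with rmvdiskhostmap, as shown earlier; a sketch with a placeholder host name:

   svcinfo lshostvdiskmap ITSO_HOST1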

10.7 Working with volumes


In this section, we describe the tasks that you can perform at a volume level. There are three ways to visualize and manage your volumes:

You can use the All Volumes panel, as shown in Figure 10-92.


Figure 10-92 All volumes panel

Or you can use the Volumes by Pool panel, as shown in Figure 10-93 on page 680.

Figure 10-93 Volumes by Pool panel

Or you can use the Volumes by Host panel, as shown in Figure 10-94.


Figure 10-94 Volumes by Host panel

Important: Several volume actions are specific to the Volumes by Pool or Volumes by Host panels. However, all of these actions and more are accessible from the All Volumes panel. All actions in the following sections are executed from the All Volumes panel.

10.7.1 Volume information


To access the All Volumes panel from the SVC Welcome panel shown in Figure 10-1 on page 632, click Volumes All Volumes (Figure 10-92 on page 680). You can add information (new columns) to the table in the All Volumes panel, as shown in Figure 10-92 on page 680; see Table information on page 637.

To retrieve more information about a specific volume, perform the following steps:
1. Select a volume in the table.
2. Click Properties in the Actions menu (Figure 10-95).


Figure 10-95 Volume Properties action

Tip: You can also access the Properties action by right-clicking a volume.

3. The Overview tab shows information about a given volume (Figure 10-96).

Figure 10-96 Volume properties: Overview tab

Note: To obtain more information about the volume, select Show Details.


4. The Host Maps tab (Figure 10-97) displays the hosts that are mapped with this volume.

Figure 10-97 Volume properties: Mapped volumes

5. The Member MDisks tab (Figure 10-98 on page 684) displays the used MDisks for this volume. You can perform actions on the MDisks such as removing them from a pool, adding them to a tier, renaming them, showing their dependent volumes, or seeing their properties.


Figure 10-98 Volume properties: Member MDisks

6. When you have finished viewing the details, click Close to return to the All Volumes panel.
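The same details are available from the CLI; a minimal sketch with a placeholder volume name:

   svcinfo lsvdisk VOL_001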

10.7.2 Creating a volume


To create a new volume, perform the following steps:
1. Go to the All Volumes panel from the SVC Welcome panel shown in Figure 10-1 on page 632, and click Volumes All Volumes.
2. Click New Volume (Figure 10-99).

Figure 10-99 New Volume action

3. Select one of the following presets, as shown in Figure on page 685:

Generic: Create volumes that use a set amount of capacity from the selected storage pool.

Thin Provision: Create volumes whose virtual capacity is large, but which use only the capacity that is written by the host application from the pool.

Mirror: Create volumes with two physical copies that provide data protection. Each copy can belong to a different storage pool to protect data from storage failures.

Thin Mirror: Create volumes with two physical copies to protect data from failures while using only the capacity that is written by the host application.

Note: For our example, we chose the Generic preset. However, regardless of the selected preset, you can reconsider your decision afterwards by customizing the volume using the Advanced... button.

4. After selecting a preset (in our example, Generic), you must select the Storage Pool on which the data will be striped (Figure 10-100).

Figure 10-100 Select the Storage Pool

5. After the Storage Pool has been selected, the window is updated automatically, and you must select a volume name and size, as shown in Figure 10-101 on page 686. Enter a name if you want to create a single volume, or a naming prefix if you want to create multiple volumes.

Volume name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The volume name can be between one and 63 characters in length.

Enter the size of the volume that you want to create and select the capacity measurement (bytes, KB, MB, GB, or TB) from the list.

Note: An entry of 1 GB uses 1024 MB.

An updated summary automatically appears at the bottom of the window to give you an idea of the space that will be used and the space that remains in the pool.


Figure 10-101 New volume: Select Name and Size

Various optional actions are available from this window:

You can modify the Storage Pool by clicking Edit. In this case, you can select another storage pool.

You can create additional volumes by clicking the add button. This action can be repeated as many times as necessary. You can remove them by clicking the remove button.

Note: When you create more than one volume, the wizard does not ask you for a name for each volume to be created. Instead, the name that you use here becomes the prefix, and a number, starting at zero, is appended to it as each volume is created.

6. You can activate and customize advanced features, such as thin provisioning or mirroring, depending on the preset that you selected. To access these settings, click Advanced...

On the Characteristics tab (Figure 10-102 on page 687), you can set the following options:

General: Format the new volume by selecting the Format Before Use check box (formatting writes zeros to the volume before it can be used; that is, it writes zeros to its MDisk extents).

Locality: Choose an I/O Group and then select a preferred node.

OpenVMS only: Enter the UDID (OpenVMS). This field needs to be completed only for an OpenVMS system.

Note: Each OpenVMS fibre-attached volume requires a user-defined identifier or unit device identifier (UDID). A UDID is a nonnegative integer that is used when an OpenVMS device name is created. To recognize volumes, OpenVMS issues a UDID value, which is a unique numerical number.


Figure 10-102 Advanced Settings: Characteristics

On the Thin Provisioning tab (Figure 10-103 on page 688), after you activate thin provisioning by selecting the Thin provisioning check box, you can set the following options:

Real: Type the real size that you want to allocate. This size is the amount of disk space that is actually allocated; it can be either a percentage of the virtual size or a specific number in GB.

Automatically Expand: Select auto expand, which allows the real disk size to grow as required.

Warning Threshold: Type a percentage or select a specific size for the usage threshold warning. A warning is generated when the used disk capacity on the thin-provisioned copy first exceeds the specified threshold.

Thin-Provisioned Grain Size: Select the grain size (32 KB, 64 KB, 128 KB, or 256 KB). Smaller grain sizes save space, and larger grain sizes produce better performance. Try to match the FlashCopy grain size if the volume will be used for FlashCopy.


Figure 10-103 Advanced Settings: Thin Provisioning

Important: If the Thin Provision or Thin Mirror preset is selected on the first page (Figure on page 685), the Thin provisioning check box is already selected and the parameter presets are the following:

Real: 2% of Virtual Capacity
Automatically Expand: Selected
Warning Threshold: Selected, with a value of 80% of Virtual Capacity
Thin-Provisioned Grain Size: 32 KB

On the Mirroring tab (Figure 10-104 on page 689), after you activate mirroring by selecting the Create Mirrored Copy check box, you can set the following option:

Mirror Sync Rate: Enter the Mirror Synchronization rate. It is the I/O governing rate, expressed as a percentage, that determines how quickly copies are synchronized. A zero value disables synchronization.

Important: If you activate this feature from the Advanced menu, you will have to select a secondary pool on the main window (Figure 10-101 on page 686). The Primary Pool is used as the primary and preferred copy for read operations. The secondary pool is used for the secondary copy.


Figure 10-104 Advanced Settings: Mirroring

Important: If the Mirror or Thin Mirror preset is selected on the first page (Figure on page 685), the Mirroring check box is already selected and the parameter preset is the following:

Mirror Sync Rate: 80% of Maximum

7. After all the advanced settings have been set, click OK to return to the main menu (Figure 10-101 on page 686).

8. You then have the choice to only create the volume using the Create button, or to create and map it using the Create and Map to Host button. If you choose to only create the volume, you return to the main All Volumes panel, where you see your volume created but not mapped (Figure 10-105). You can map it later.

Figure 10-105 Volume created without mapping

If you want to create and map the volume from the volume creation window, click the Continue button and another window opens. In the Modify Mappings window, select the host to which you want to map this volume by using the drop-down button, and then click Next (Figure 10-106 on page 690).


Figure 10-106 Select the host to which to map your volume

In the Modify Mappings window, verify the mapping. If you want to modify it, select the volume or volumes that you want to map to a host and move each of them to the right table using the right arrow, as shown in Figure 10-107. If you need to remove them, use the left arrow.

Figure 10-107 Modify Mappings window: Adding volumes to a host

In the right table, you can edit the SCSI ID by selecting a mapping that is highlighted in yellow, indicating that the mapping is new. Next, click Edit SCSI ID (shown in Figure 10-86 on page 677).

Note: Only new mappings can have their SCSI ID changed. To edit the SCSI ID of an existing mapping, you must unmap the volume and recreate the mapping.

In the Edit SCSI ID window, change the SCSI ID and then click OK (Figure 10-108 on page 691).


Figure 10-108 Modify Mappings window: Edit SCSI ID

After all the volumes that you wanted to map to this host have been added, click OK to create the host mapping relationships and finalize the volume creation. You return to the main All Volumes panel and see your volume created and mapped, as shown in Figure 10-109.

Figure 10-109 Volume created with mapping
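For reference, volumes can also be created from the CLI with the mkvdisk command. The following lines are a minimal sketch in which POOL1, VOL_001, and VOL_002 are placeholder names; the second line creates a thin-provisioned volume using the same defaults that the GUI preset applies:

   svctask mkvdisk -mdiskgrp POOL1 -iogrp 0 -size 10 -unit gb -name VOL_001
   svctask mkvdisk -mdiskgrp POOL1 -iogrp 0 -size 10 -unit gb -name VOL_002 -rsize 2% -autoexpand -warning 80% -grainsize 32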

10.7.3 Renaming a volume


Perform the following steps to rename a volume:
1. Select the volume that you want to rename in the table.
2. Click Rename in the Actions menu (Figure 10-110 on page 692).


Figure 10-110 Rename Action

Tip: There are two other ways to rename a volume. You can right-click a volume and select Rename from the list, or you can use the method explained in 10.7.4, Modifying a volume on page 692.

3. In the Rename Volume window, type the new name that you want to assign to the volume, and click OK (Figure 10-111).

Figure 10-111 Renaming a volume

Volume name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The volume name can be between one and 63 characters in length.
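The CLI equivalent is the chvdisk command; a minimal sketch with placeholder names:

   svctask chvdisk -name VOL_001_NEW VOL_001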

10.7.4 Modifying a volume


To modify a volume, perform the following steps:
1. Select the volume that you want to modify in the table.
2. Click Properties in the Actions menu (Figure 10-112 on page 693).


Figure 10-112 Properties action

Tip: You can also right-click a volume and select Properties from the list.

3. In the Overview tab, click Edit to modify the parameters for this volume (Figure 10-113 on page 694). From this window, you can modify the following parameters:

Volume Name: You can modify the volume name. You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The volume name can be between one and 63 characters in length.

I/O Group: You can select an alternate I/O Group from the list to alter the I/O Group to which the volume is assigned. You can also select the Force check box; this option changes the I/O group even when the cache state is either Not Empty or Corrupt, and it stops synchronization for mirrored volumes.

Preferred node: You can change the preferred node for this volume. Hosts try to access the volume through the preferred node. By default, the system automatically balances the load between nodes.

Mirror Sync Rate: Change the Mirror Sync Rate. It is the I/O governing rate, expressed as a percentage, that determines how quickly copies are synchronized. A zero value disables synchronization.

Cache Mode: By clearing the check box, the SVC cache is disabled for the volume (the read/write cache is disabled).

OpenVMS: Enter the UDID (OpenVMS). This field needs to be completed only for an OpenVMS system.


Note: Each OpenVMS fibre-attached volume requires a user-defined identifier or unit device identifier (UDID). A UDID is a nonnegative integer that is used when an OpenVMS device name is created. To recognize volumes, OpenVMS issues a UDID value, which is a unique numerical number.

Figure 10-113 Modify a volume

4. Save the changes by clicking Save.
5. Close the Volume Details window by clicking Close.
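Several of these parameters can also be changed from the CLI with chvdisk; a minimal sketch in which VOL_001 and the values shown are placeholders:

   svctask chvdisk -syncrate 80 VOL_001
   svctask chvdisk -udid 1234 VOL_001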

10.7.5 Modifying thin-provisioning volume properties


For thin-provisioned volumes, in addition to the properties that you can modify by following the instructions in 10.7.4, Modifying a volume on page 692, there are other properties specific to thin provisioning that you can modify by performing the following steps:

1. Depending on the case, use one of the following actions:

For a non-mirrored volume: Select the volume and in the Actions menu, click Volume Copy Actions Thin Provisioned Edit Properties, as shown in Figure 10-114.


Figure 10-114 Non-mirrored volume: Thin-provisioned properties action menu


Tip: You can also right-click the volume and select Volume Copy Actions Thin Provisioned Edit Properties from the list.

For a mirrored volume: Select the thin-provisioned copy of the mirrored volume that you want to modify. In the Actions menu, click Thin Provisioned Edit Properties, as shown in Figure 10-116.


Figure 10-116 Mirrored volume: Thin-provisioned properties action menu

Tip: You can also right-click the thin provisioned copy and select Thin Provisioned Edit Properties from the list.

2. The Edit Properties: volumename window (where volumename is the volume that you selected in the previous step) opens (Figure 10-117). From this window, you can modify the following settings:

Warning Threshold: Type a percentage. A warning is generated when the used disk capacity on the thin-provisioned copy first exceeds the specified threshold.

Automatically Expand: Autoexpand allows the real disk size to grow automatically as required.

Figure 10-117 Edit thin-provisioning properties window

Note: You can modify the real size of your thin-provisioned volume by using the GUI. Refer to 10.7.12, Shrinking the real capacity of a thin-provisioned volume on page 709 or 10.7.13, Expanding the real capacity of a thin provisioned volume on page 712, depending on your needs.
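These two properties can also be changed from the CLI with chvdisk; a minimal sketch in which VOL_001, the copy ID, and the values are placeholders:

   svctask chvdisk -warning 85% -copy 0 VOL_001
   svctask chvdisk -autoexpand on -copy 0 VOL_001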


10.7.6 Deleting a volume


To delete a volume, perform the following steps:
1. Select the volume or volumes that you want to delete in the table.
2. Click Delete in the Actions menu (Figure 10-118).

Figure 10-118 Delete Action

Tip: You can also right-click a volume and select Delete from the list.

3. The Delete Volume window opens, as shown in Figure 10-119 on page 698. In the Verify the number of volumes that you are deleting field, enter a value matching the number of volumes that you want to remove. This verification helps to prevent you from inadvertently deleting the wrong volumes.

Important: Deleting a volume is a destructive action for the user data residing on that volume. If a volume is still associated with a host or used with FlashCopy or remote copy, and you definitely want to delete it, select the Delete the volume even if it has host mappings or is used in FlashCopy mappings or remote-copy relationships. option.

4. Click Delete to complete the operation (Figure 10-119 on page 698).


Figure 10-119 Delete Volume
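The CLI equivalent is the rmvdisk command; a minimal sketch with a placeholder volume name. The -force parameter corresponds to the check box described above:

   svctask rmvdisk -force VOL_001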

10.7.7 Creating or modifying the host mapping


To create or modify a host mapping, perform the following steps:
1. Select the volume in the table.
2. Click Map to Host in the Actions menu (Figure 10-120).

Tip: You can also right-click a volume and select Map to Host from the list.

Figure 10-120 Map to Host action

3. On the Modify Mappings window, select the host to which you want to map this volume using the drop-down button, and then click Next (Figure 10-121).


Figure 10-121 Select the host to which you want to map your volume

4. On the Modify Mappings window, verify the mapping. If you want to modify it, select the volume or volumes that you want to map to a host and move each of them to the right table using the right arrow as shown in Figure 10-122. If you need to remove them, use the left arrow.

Figure 10-122 Modify Mappings window: Adding volumes to a host

In the right table, you can edit the SCSI ID. Select a mapping that is highlighted in yellow, which indicates that the mapping is new, and click Edit SCSI ID (shown in Figure 10-86 on page 677).

Note: Only new mappings can have their SCSI ID changed. To edit the SCSI ID of an existing mapping, you must unmap the volume and recreate the mapping.

In the Edit SCSI ID window, change the SCSI ID and then click OK (Figure 10-123 on page 700).


Figure 10-123 Modify Mappings window: Edit SCSI ID

5. After all the volumes you want to map to this host have been added, click OK. You will return to the main All Volumes panel.

10.7.8 Deleting a host mapping


Note: Before deleting a host mapping, make sure that the host is no longer using that disk. Unmapping a disk from a host does not destroy the disk's contents. Unmapping a disk has the same effect as powering off the computer without first performing a clean shutdown; it might leave the data in an inconsistent state, and any running application that was using the disk will begin to receive I/O errors.

To delete a host mapping to a volume, perform the following steps:
1. Select the volume in the table.
2. Click Properties in the Actions menu (Figure 10-124 on page 701).


Figure 10-124 Volume Properties

Tip: You can also right-click a volume and select Properties from the list.

3. On the Properties window, click the Host Maps tab (Figure 10-125).


Figure 10-125 Host Maps window

Note: You can also access this window by selecting the volume in the table and clicking View Mapped Hosts in the Actions menu (Figure 10-126).

Figure 10-126 View Mapped Hosts


4. Select the host mapping or mappings that you want to remove.
5. Click Unmap from Host (Figure 10-127).

Figure 10-127 Host Maps window: Unmap from Host action

In the Unmap Host window (Figure 10-128 on page 703), in the Verify the number of hosts that this operation affects: field, enter a value matching the number of host mappings that you want to remove. This verification helps to prevent you from inadvertently removing the wrong mappings.

Figure 10-128 Unmap Host

6. Click Unmap to remove the host mapping or mappings. This action returns you to the Host Maps window.
7. Click Close to return to the main All Volumes panel.


10.7.9 Deleting all host mappings for a given volume


To delete all host mappings for a given volume, perform the following steps:
1. Select the volume in the table.
2. Click Unmap All Hosts in the Actions menu (Figure 10-129).

Figure 10-129 Unmap All Hosts from Actions menu

Tip: You can also right-click a volume and select Unmap All Hosts from the list.

3. In the Unmap from Hosts window (Figure 10-130), in the Verify the number of mappings that this operation affects: field, enter a value matching the number of mappings that you want to remove. This verification helps to prevent you from inadvertently removing the wrong mappings.


Figure 10-130 Unmap from Hosts window

4. Click Unmap to remove the host mapping or mappings. This action returns you to the All Volumes panel.

10.7.10 Shrinking a volume


Important: For thin-provisioned volumes, using this method to shrink a volume shrinks its virtual capacity. To shrink its real capacity, refer to 10.7.12, Shrinking the real capacity of a thin-provisioned volume on page 709.

The method that the SVC uses to shrink a volume is to remove the required number of extents from the end of the volume. Depending on where the data actually resides on the volume, this action can be quite destructive. For example, you might have a volume that consists of 128 extents (0 to 127) of 16 MB (2 GB capacity), and you want to decrease the capacity to 64 extents (1 GB capacity). In this case, the SVC simply removes extents 64 to 127. Depending on the operating system, there is no easy way to ensure that your data resides entirely on extents 0 through 63, so be aware that you might lose data.

Although shrinking is easily done using the SVC, you must ensure that your operating system supports it, either natively or by using third-party tools, before using this function. In addition, it is good practice to always have a good, current backup before you execute this task.

Shrinking a volume is useful in certain circumstances, such as:

Reducing the size of a candidate target volume of a copy relationship to make it the same size as the source.

Releasing space from volumes to have free extents in the Storage Pool, provided that you no longer use that space and take precautions with the remaining data.

Assuming that your operating system supports it, perform the following steps to shrink a volume:


1. Perform any necessary steps on your host to ensure that you are not using the space that you are about to remove.
2. Select the volume that you want to shrink in the table.
3. Click Shrink in the Actions menu (Figure 10-131).

Figure 10-131 Shrink Action

Tip: You can also right-click a volume and select Shrink from the list.

4. The Shrink Volume: volumename window (where volumename is the volume that you selected in the previous step) opens; see Figure 10-132 on page 707. You can either enter how much you want to shrink the volume by using the Shrink By field, or you can directly enter the final size that you want for the volume by using the Final Size field. The other field is computed automatically. For example, if you have a 20 GB disk and you want it to become 15 GB, you can specify 5 GB in the Shrink By field, or you can directly specify 15 GB in the Final Size field, as shown in Figure 10-132 on page 707.

5. When you are finished, click Shrink, as shown in Figure 10-132 on page 707, and the changes become visible on your host.


Figure 10-132 Shrinking a volume
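The CLI equivalent is the shrinkvdisksize command; a minimal sketch with a placeholder volume name that shrinks the volume by 5 GB:

   svctask shrinkvdisksize -size 5 -unit gb VOL_001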

10.7.11 Expanding a volume


Important: For thin-provisioned volumes, using this method expands the volume's virtual capacity. If you want to expand its real capacity, see 10.7.13, Expanding the real capacity of a thin provisioned volume on page 712.

Expanding a volume presents a larger capacity disk to your operating system. Although you can expand a volume easily using the SVC, you must ensure that your operating system is prepared for it and supports the volume expansion before you use this function.

Dynamic expansion of a volume is only supported when the volume is in use by one of the following operating systems:

AIX 5L V5.2 and higher

Microsoft Windows Server 2000, Windows Server 2003, and Windows Server 2008 for basic disks

Microsoft Windows Server 2000 and Windows Server 2003 with a hot fix from Microsoft (Q327020) for dynamic disks, and Windows Server 2008

If your operating system supports it, perform the following steps to expand a volume:
1. Select the volume in the table.
2. Click Expand in the Actions menu (Figure 10-133 on page 708).


Figure 10-133 Expand Action

Tip: You can also right-click a volume and select Expand from the list.

3. The Expand Volume: volumename window (where volumename is the volume that you selected in the previous step) opens; see Figure 10-134 on page 709. You can either enter how much you want to enlarge the volume by using the Expand By field, or you can directly enter the final size that you want for the volume by using the Final Size field. The other field is computed automatically. For example, if you have a 10 GB disk and you want it to become 20 GB, you can specify 10 GB in the Expand By field, or you can directly specify 20 GB in the Final Size field, as shown in Figure 10-134 on page 709.

Volume expansion notes:

No support exists for the expansion of image mode volumes.

If there are insufficient extents to expand your volume to the specified size, you receive an error message.

If you use volume mirroring, all copies must be synchronized before expanding.

4. When you are finished, click Expand (see Figure 10-134 on page 709).


Figure 10-134 Expanding a volume
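The CLI equivalent is the expandvdisksize command; a minimal sketch with a placeholder volume name that expands the volume by 10 GB:

   svctask expandvdisksize -size 10 -unit gb VOL_001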

10.7.12 Shrinking the real capacity of a thin-provisioned volume


Important: From a host's perspective, the virtual capacity shrinkage of a volume impacts host access; to determine these impacts, see 10.7.10, Shrinking a volume on page 705. The real capacity shrinkage of a volume, described in this section, is transparent to the hosts.

To shrink the real size of a thin-provisioned volume, perform the following steps:

1. Depending on the case, use one of the following actions:

For a non-mirrored volume: Select the volume and in the Actions menu, click Volume Copy Actions Thin provisioned Shrink, as shown in Figure 10-135.

Figure 10-135 Non-mirrored volume: Thin-provisioned shrink action menu


Figure 10-136 Non-mirrored volume: Thin provisioned shrink action menu

Tip: You can also right-click the volume and select Volume Copy Actions Thin provisioned Shrink from the list.

For a mirrored volume: Select the thin-provisioned copy of the mirrored volume that you want to modify, and in the Actions menu, click Thin Provisioned Shrink, as shown in Figure 10-137.


Figure 10-137 Mirrored volume: Thin-provisioned shrink action menu

Tip: You can also right-click the thin provisioned copy and select Thin Provisioned Shrink from the list.

2. The Shrink Volume: volumename window (where volumename is the volume that you selected in the previous step) opens; see Figure 10-138. You can either enter how much you want to shrink the volume by using the Shrink By field, or you can directly enter the final real capacity that you want for the volume by using the Final Real Capacity field. The other field is computed automatically. For example, if the current real capacity equals 118.8 MB and you want a final real size equal to 10 MB, you can specify 108.8 MB in the Shrink By field, or you can directly specify 10 MB in the Final Real Capacity field, as shown in Figure 10-138.

3. When you are finished, click Shrink (Figure 10-138). This change is transparent to your host.

Figure 10-138 Shrink real capacity window
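From the CLI, the real capacity is shrunk with the -rsize parameter of shrinkvdisksize; a minimal sketch in which the volume name, copy ID, and size are placeholders:

   svctask shrinkvdisksize -rsize 108 -unit mb -copy 0 VOL_001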


10.7.13 Expanding the real capacity of a thin provisioned volume


Important: From a host's perspective, the virtual capacity expansion of a volume impacts host access; to understand these impacts, see 10.7.11, Expanding a volume on page 707. The real capacity expansion of a volume, described in this section, is transparent to the hosts.

To expand the real size of a thin-provisioned volume, perform the following steps:

1. Depending on the case, use one of the following actions:

For a non-mirrored volume: Select the volume and in the Actions menu, click Volume Copy Actions Thin provisioned Expand (Figure 10-139).

Figure 10-139 Non-mirrored volume: Thin provisioned expand action menu

Tip: You can also right-click the volume and select Volume Copy Actions Thin provisioned Expand from the list.

For a mirrored volume: Select the thin-provisioned copy of the mirrored volume that you want to modify, and in the Actions menu, click Thin Provisioned Expand (Figure 10-140).


Figure 10-140 Mirrored volume: Thin provisioned expand action menu

Tip: You can also right-click the thin provisioned copy and select Thin Provisioned Expand from the list.

2. The Expand Volume: volumename window (where volumename is the volume that you selected in the previous step) opens (Figure 10-141). You can either enter how much you want to expand the volume by using the Expand By field, or you can directly enter the final real capacity that you want for the volume by using the Final Real Capacity field. The other field is computed automatically. For example, if the current real capacity equals 10 MB and you want a final real size equal to 100 MB, you can specify 90 MB in the Expand By field, or you can directly specify 100 MB in the Final Real Capacity field, as shown in Figure 10-141.

3. When you are finished, click Expand (Figure 10-141). This change is transparent to your host.

Figure 10-141 Expand real capacity window
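From the CLI, the real capacity is expanded with the -rsize parameter of expandvdisksize; a minimal sketch in which the volume name, copy ID, and size are placeholders:

   svctask expandvdisksize -rsize 90 -unit mb -copy 0 VOL_001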

10.7.14 Migrating a volume


To migrate a volume, perform the following steps:

1. Select the volume that you want to migrate in the table.
2. Click Migrate to Another Pool in the Actions menu (Figure 10-142).

Figure 10-142 Migrate to Another Pool action

Tip: You can also right-click a volume and select Migrate to Another Pool from the list.

3. The Migrate Volume Copy window opens (Figure 10-143). Select the Storage Pool to which you want to reassign the volume. You are only presented with a list of Storage Pools that have the same extent size.

4. When you have finished making your selections, click Migrate to begin the migration process.


Figure 10-143 Migrate Volume Copy window

Important: After a migration starts, you cannot stop it. Migration continues until it is complete unless it is stopped or suspended by an error condition, or the volume that is being migrated is deleted.

5. You can check the migration using the Running Tasks menu (Figure 10-144 on page 715).

Figure 10-144 Long Running Tasks Area

To expand this area, click the icon and then click Migration. Figure 10-145 shows a detailed view of the running tasks.

Figure 10-145 Long Running Task: Volume migration

6. When the migration is finished, the volume will be part of the new pool.
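The CLI equivalent is the migratevdisk command, and progress can be checked with lsmigrate; a minimal sketch with placeholder names:

   svctask migratevdisk -vdisk VOL_001 -mdiskgrp POOL2
   svcinfo lsmigrate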

10.7.15 Adding a mirrored copy to an existing volume


You can add a mirrored copy to an existing volume, which gives you two copies of the underlying disk extents.

Tip: You can also create a new mirrored volume by selecting the Mirror or Thin Mirror preset during volume creation, as shown in Figure on page 685.

You can use a volume mirror for any operation for which you can use a volume. It is transparent to higher-level operations, such as Metro Mirror, Global Mirror, or FlashCopy.

Creating a volume mirror from an existing volume is not restricted to the same Storage Pool, so it is an ideal method to protect your data from a disk system or array failure. If one copy of the mirror fails, it provides continuous data access through the other copy. When the failed copy is repaired, the copies automatically resynchronize.

You can also use a volume mirror as an alternative migration tool, where you synchronize the mirror before splitting off the original side of the mirror. The volume stays online, and can be used normally, while the data is being synchronized. The copies can also have separate structures (that is, striped, image, sequential, or space-efficient) and separate extent sizes.

To create a mirror copy of a volume, perform the following steps:
1. Select the volume in the table.
2. In the Actions menu, click Volume Copy Actions Add Mirrored Copy (Figure 10-146).


Figure 10-146 Add Mirrored Copy actions

Tip: You can also right-click a volume and select Volume Copy Actions and then Add Mirrored Copy from the list.

3. The Add Volume Copy: volumename window (where volumename is the volume that you selected in the previous step) opens (Figure 10-147 on page 718). You can perform the following steps separately or in combination:

Select the Storage Pool in which you want to put the copy. To maintain higher availability, choose a separate pool.

Select the Enable Thin Provisioning check box to make the copy space-efficient. The following parameters are used for this thin-provisioned copy:

Real Size: 2% of Virtual Capacity
Automatically Expand: Active
Warning Threshold: 80% of Virtual Capacity
Thin-Provisioned Grain Size: 32 KB


Note: Real Size, Auto expand, and Warning Threshold can be changed only after the thin-provisioned volume copy has been added. For information about modifying the real size of your thin-provisioned volume, see 10.7.12, Shrinking the real capacity of a thin-provisioned volume on page 709 and 10.7.13, Expanding the real capacity of a thin provisioned volume on page 712. For information about modifying the Auto expand and Warning Threshold of your thin-provisioned volume, see 10.7.5, Modifying thin-provisioning volume properties on page 694.

4. Click Add Copy (Figure 10-147).

Figure 10-147 Add Copy to volume window

5. You can check the synchronization progress using the Running Tasks menu (see Figure 10-144 on page 715). To expand this Status Area, click the icon and click Volume Synchronization. Figure 10-148 on page 719 shows a detailed view of the running tasks.


Figure 10-148 Running Task: Volume Synchronization

Note: You can change the Mirror Sync Rate (the default is 50%) by modifying the volume properties. For more information, see 10.7.4, Modifying a volume on page 692.

6. When synchronization is finished, the volume has a synchronized copy in the new pool (Figure 10-149).

Figure 10-149 Mirrored volume

Note: As shown in Figure 10-149, the primary copy is identified with an asterisk (*). In this example, Copy 0 is the primary copy.
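The CLI equivalent is the addvdiskcopy command; a minimal sketch in which VOL_001 and POOL2 are placeholder names:

   svctask addvdiskcopy -mdiskgrp POOL2 VOL_001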

10.7.16 Deleting a mirrored copy from a volume mirror


To remove a volume copy, perform the following steps:
1. Select the volume copy that you want to remove in the table, and in the Actions menu, click Delete this Copy (Figure 10-150 on page 720).


Figure 10-150 Delete this Copy action

Tip: You can also right-click a volume and select Delete this Copy from the list.

2. The Warning window opens (Figure 10-151). Click OK to confirm your choice.

Figure 10-151 Warning window

Note: If you try to remove the primary copy before it has been synchronized with the other one, you will receive the message: The command failed because the copy specified is the only synchronized copy. You must wait until the end of the synchronization to be able to remove this copy.

3. The copy is now deleted.
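The CLI equivalent is the rmvdiskcopy command; a minimal sketch in which the copy ID and volume name are placeholders:

   svctask rmvdiskcopy -copy 1 VOL_001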

10.7.17 Splitting a volume copy


To split off a synchronized volume copy to a new volume, perform the following steps:
1. Select the volume copy that you want to split in the table, and in the Actions menu, click Split into New Volume (Figure 10-152 on page 721).


Figure 10-152 Split into New Volume action

Tip: You can also right-click a volume and select Split into New Volume from the list.

2. The Split Volume Copy window opens (Figure 10-153). In this window, type a name for the new volume.

Volume name: If you do not provide a name, the SVC automatically generates the name vdiskx (where x is the ID sequence number that is assigned by the SVC internally). If you want to provide a name, you can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The volume name can be between one and 63 characters in length.

3. Click Split Volume Copy (Figure 10-153).

Figure 10-153 Split Volume Copy window

4. This new volume is now available to be mapped to a host.

Important: After you split a volume mirror, you cannot resynchronize or recombine the copies. You must create a volume copy from scratch.
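The CLI equivalent is the splitvdiskcopy command; a minimal sketch with placeholder names:

   svctask splitvdiskcopy -copy 1 -name VOL_001_SPLIT VOL_001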


10.7.18 Validating volume copies


To validate the copies of a mirrored volume, perform the following steps:
1. Select a copy of this volume in the table, and in the Actions menu, click Validate Volume Copies (Figure 10-154 on page 722).

Figure 10-154 Validate Volume Copies actions

2. The Validate Volume Copies window opens (Figure 10-155). In this window, select one of the following options:

Generate Event of differences: Use this option if you only want to verify that the mirrored volume copies are identical. If any difference is found, the command stops and logs an error that includes the logical block address (LBA) and the length of the first difference. You can use this option, starting at a different LBA each time, to count the number of differences on a volume.

Overwrite differences: Use this option to overwrite contents from the primary volume copy to the other volume copy. The command corrects any differing sectors by copying the sectors from the primary copy to the copies being compared. Upon completion, the command process logs an event that indicates the number of differences that were corrected. Use this option if you are sure that either the primary volume copy data is correct or that your host applications can handle incorrect data.

Return Media Error to Host: Use this option to convert sectors on all volume copies that contain different contents into virtual medium errors. Upon completion, the command logs an event that indicates the number of differences that were found, the number that were converted into medium errors, and the number that were not converted. Use this option if you are unsure what the correct data is, and you do not want an incorrect version of the data to be used.


Figure 10-155 Validate Volume Copies

3. Click Validate (Figure 10-155 on page 723).
4. The volume is now checked.
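From the CLI, validation is performed with the repairvdiskcopy command; the three GUI options broadly correspond to its -validate, -resync, and -medium parameters. A minimal sketch with a placeholder volume name:

   svctask repairvdiskcopy -validate VOL_001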

10.7.19 Migrating to a thin-provisioned volume using volume mirroring


To migrate a volume to a thin-provisioned volume, perform the following steps:
1. Select the volume in the table.
2. In the Actions menu, click Volume Copy Actions Add Mirrored Copy (Figure 10-156).

Figure 10-156 Add Mirrored Copy actions


Tip: You can also right-click a volume and select Volume Copy Actions and then Add Mirrored Copy from the list.

3. The Add Volume Copy: volumename window (where volumename is the volume that you selected in the previous step) opens (Figure 10-157 on page 724). You can perform the following steps separately or in combination:

Select the Storage Pool in which you want to put the copy. To maintain higher availability, choose a separate pool.

Select the Enable Thin Provisioning check box to make the copy space-efficient. The following parameters are used for this thin-provisioned copy:

Real Size: 2% of Virtual Capacity
Automatically Expand: Active
Warning Threshold: 80% of Virtual Capacity
Thin-Provisioned Grain Size: 32 KB

Note: Real Size, Auto expand, and Warning Threshold can be changed in the GUI after the volume copy has been added. To set the Thin-Provisioned Grain Size, you need to use the CLI.

4. Click Add Copy.

Figure 10-157 Add Copy to volume window

5. You can check the synchronization progress using the Running Tasks Status Area menu, as shown in Figure 10-144 on page 715. To expand this Status Area, click the icon and click Volume Synchronization. Figure 10-158 shows the detailed view of the running tasks.

Figure 10-158 Running Task: Volume Synchronization

Note: You can change the Mirror Sync Rate (by default, 50%) by modifying the volume properties. For more information, see 10.7.4, Modifying a volume on page 692.

6. When the synchronization is finished, select the non-thin-provisioned copy that you want to remove in the table, and in the Actions menu, click Delete this Copy (Figure 10-159).

Figure 10-159 Delete this Copy window

Tip: You can also right-click a volume and select Delete this Copy from the list.

7. The Warning window opens (Figure 10-160). Click OK to confirm your choice.

Figure 10-160 Warning window

Note: If you try to remove the primary copy before it has been synchronized with the other one, you will receive the following message: The command failed because the copy specified is the only synchronized copy. You must wait until the end of the synchronization to be able to remove this copy.


8. When the copy is deleted, your thin-provisioned volume is ready to be used. At this point, you have completed the required tasks to manage volumes within an SVC environment.
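The whole migration can also be scripted from the CLI; a minimal sketch with placeholder names that adds a thin-provisioned copy, checks synchronization, and then removes the fully provisioned copy:

   svctask addvdiskcopy -mdiskgrp POOL2 -rsize 2% -autoexpand -grainsize 32 -warning 80% VOL_001
   svcinfo lsvdisksyncprogress VOL_001
   svctask rmvdiskcopy -copy 0 VOL_001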

10.7.20 Creating a volume in image mode


Refer to Chapter 6, Data migration on page 227 for the steps required to create a volume in image mode.

10.7.21 Migrating a volume to an image mode volume


Refer to Chapter 6, Data migration on page 227 for the steps required to migrate a volume to an image mode volume.

10.7.22 Creating an image mode mirrored volume


Refer to Chapter 6, Data migration on page 227 for the steps required to create an image mode mirrored volume.

10.8 Copy Services: managing FlashCopy


It is often easier to work with FlashCopy by using the GUI if you have a small number of mappings. When using many mappings, however, use the CLI to execute your commands.

Note: See Chapter 8, Advanced Copy Services on page 373 for more information about the functionality of Copy Services in the SVC environment.

In this section, we describe the tasks that you can perform at a FlashCopy level. There are three ways to visualize and manage your FlashCopy mappings:

By using the FlashCopy panel (Figure 10-161). In its basic mode, the IBM FlashCopy function copies the contents of a source volume to a target volume. Any data that existed on the target volume is lost and is replaced by the copied data.


Figure 10-161 FlashCopy panel

By using the Consistency Groups panel (Figure 10-162 on page 727). A Consistency Group is a container for mappings. You can add many mappings to a Consistency Group.

Figure 10-162 Consistency Groups panel

By using the FlashCopy Mappings panel (Figure 10-163 on page 728). A FlashCopy mapping defines the relationship between a source volume and a target volume.


Figure 10-163 FlashCopy Mappings panel

10.8.1 Creating a FlashCopy Mapping


In this section, we create FlashCopy mappings for volumes with their respective targets. To perform this action, follow these steps:
1. From the SVC Welcome panel, click Copy Services FlashCopy. The FlashCopy panel opens (Figure 10-164 on page 728).

Figure 10-164 FlashCopy panel

2. Select the volume that you want to create the FlashCopy relationship for (Figure 10-165).


Note: To create many FlashCopy mappings at one time, select multiple volumes by holding down the Ctrl key and using the mouse to select the entries that you want.

Figure 10-165 FlashCopy mapping: Select the volume (or volumes)

Depending on whether or not you have already created the target volumes for your FlashCopy mappings, there are two options:

If you have already created the target volumes, see Using existing target volumes on page 729.

If you want the SVC to create the target volumes for you, see Creating new target volumes on page 734.

Using existing target volumes


1. Click Advanced FlashCopy... and then click Use existing target volumes in the Actions menu (Figure 10-166).


Figure 10-166 Use existing target volumes action

2. The New FlashCopy Mapping window opens (see Figure 10-167). In this window, you create the relationship between the source volume (the disk that is copied) and the target volume (the disk that receives the copy). A mapping can be created between any two volumes in a cluster. Select a volume in the Target Volumes column using the drop-down list for your selected Source Volume, and then click the Add button (Figure 10-194 on page 748). If you need to create other relationships, repeat this action.

Important: The source and target volumes must be of equal size, so for a given source volume, only targets of the appropriate size are visible.


Figure 10-167 New FlashCopy Mapping

To remove a relationship that has been created, use the remove button (Figure 10-168 on page 731).

Note: The volumes do not have to be in the same I/O group or storage pool.

3. Click Next after all of the relationships that you want to create are registered (Figure 10-168).

Figure 10-168 New FlashCopy Mapping with relations created

4. On the next window, select one FlashCopy preset. The GUI provides three presets (Snapshot, Clone, and Backup) to simplify the more common FlashCopy operations (Figure 10-169). The presets and their use cases are described here:

Snapshot: Creates a copy-on-write point-in-time copy of the source volume.

Clone: Creates an exact replica of the source volume on a target volume. The copy can be changed without impacting the original volume.

Backup: Creates a FlashCopy mapping that can be used to recover data or objects if the system experiences data loss. These backups can be copied multiple times from source and target volumes.


Figure 10-169 New FlashCopy Mapping window

For whichever preset you select, you can customize various advanced options. You access these settings by clicking Advanced Settings (Figure 10-170 on page 733). If you prefer not to customize these settings, go directly to step 5 on page 733. You can customize the following options, as shown in Figure 10-170:

Background Copy Rate: This option determines the priority that is given to the copy process. A faster rate increases the priority of the process, which can affect the performance of other operations.

Incremental: This option copies only the parts of the source or target volumes that have changed since the last copy. Incremental copies reduce the completion time of the copy operation.

Note: Even if the type of the FlashCopy mapping is incremental, the first copy process copies all of the data from the source to the target volume.

Delete after completion: This option automatically deletes a FlashCopy mapping after the background copy is completed. Do not use this option when the background copy rate is set to zero (0).

Cleaning Rate: This option minimizes the amount of time that a mapping is in the stopping state. If the mapping has not completed, the target volume is offline while the mapping is stopping.


Figure 10-170 New FlashCopy Mapping Advanced Settings

5. If you want to include this FlashCopy mapping in a Consistency Group, in the window that is shown in Figure 10-171 on page 733, select Yes, add the mappings to a Consistency Group, and also select the Consistency Group from the drop-down list.

Figure 10-171 Add the mappings to a Consistency Group

If you do not want to include this FlashCopy mapping in a Consistency Group, select No, do not add the mappings to a Consistency Group (Figure 10-172).

Figure 10-172 Do not add the mappings to a Consistency Group


6. Then click Finish as shown in Figure 10-171 and Figure 10-172.

7. Check the result of this FlashCopy mapping (Figure 10-173 on page 734). For each FlashCopy mapping relationship created, a mapping name is automatically generated starting with fcmapX, where X is an available number. If needed, you can rename these mappings; see 10.7.4 on page 692 for more information about this topic.

Figure 10-173 Flash Copy Mapping

At this point, the FlashCopy mapping is now ready to be used.
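Tip: As an approximate CLI equivalent of this procedure, assuming hypothetical volumes named VOL1 (source) and VOL1_T (an existing target of equal size), you can create the same mapping with the svctask mkfcmap command; verify the exact syntax against the command reference for your code level:

svctask mkfcmap -source VOL1 -target VOL1_T -name fcmap_VOL1

The mapping is created but is not started until you issue svctask startfcmap (see 10.8.15, Starting FlashCopy mappings).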

Creating new target volumes


1. If you have not created a target volume for this source volume, click Advanced FlashCopy... and then click Create new target volumes in the Actions menu (Figure 10-174).

Note: If the target volume does not exist, it is created with a name that is based on its source volume and a generated number at the end, for example, source_volume_name_XX, where XX is a number that is generated dynamically.


Figure 10-174 Create new target volumes action

2. On the New FlashCopy Mapping window (Figure 10-175 on page 736), you need to select one FlashCopy preset. The GUI provides three presets (Snapshot, Clone, and Backup) to simplify the more common FlashCopy operations. The presets and their use cases are described here:

Snapshot: Creates a copy-on-write point-in-time copy of the source volume.

Clone: Creates an exact replica of the source volume on a target volume. The copy can be changed without impacting the original volume.

Backup: Creates a FlashCopy mapping that can be used to recover data or objects if the system experiences data loss. These backups can be copied multiple times from source and target volumes.


Figure 10-175 New FlashCopy Mapping window

Whichever preset you select, you can customize various advanced options. To access these settings, click Advanced Settings (Figure 10-176 on page 737). If you prefer not to customize these settings, go directly to step 3 on page 737. You can customize the following options, as shown in Figure 10-176 on page 737:

Background Copy Rate: This option determines the priority that is given to the copy process. A faster rate increases the priority of the process, which can affect the performance of other operations.

Incremental: This option copies only the parts of the source or target volumes that have changed since the last copy. Incremental copies reduce the completion time of the copy operation.

Note: Even if the type of the FlashCopy mapping is incremental, the first copy process copies all of the data from the source to the target volume.

Delete after completion: This option automatically deletes a FlashCopy mapping after the background copy is completed. Do not use this option when the background copy rate is set to zero (0).

Cleaning Rate: This option minimizes the amount of time that a mapping is in the stopping state. If the mapping has not completed, the target volume is offline while the mapping is stopping.


Figure 10-176 New FlashCopy Mapping Advanced Settings

3. If you want to include this FlashCopy mapping in a Consistency Group, in the next window select Yes, add the mappings to a Consistency Group and select the Consistency Group in the drop-down list (Figure 10-177). If you do not want to include this FlashCopy mapping in a Consistency Group, select No, do not add the mappings to a Consistency Group. Choose whichever option you prefer, then click Next (Figure 10-177).

Figure 10-177 Add the mappings to a Consistency Group

4. In the next window (Figure 10-178 on page 738), select the storage pool that is used to automatically create the new targets. You can choose the same storage pool that is used by the source volume, or you can select a different pool from the list. In either case, select one storage pool and then click Next.


Figure 10-178 Select the storage pool

5. Select whether you want the target volume to be thin-provisioned. There are three choices available, as shown in Figure 10-179 on page 738:

Yes, in which case you enter the following parameters:

Real: Type the real size that you want to allocate. This size is the amount of disk space that is actually allocated. It can be either a percentage of the virtual size or a specific number in GB.

Automatically Expand: Select auto expand, which allows the real disk size to grow as required.

Warning Threshold: Type a percentage or select a specific size for the usage threshold warning. A warning is generated when the used disk capacity on the space-efficient copy first exceeds the specified threshold.

No.

Inherit properties from source volume.

Click Finish to complete the FlashCopy Mapping operation.

Figure 10-179 Thin provisioning option


6. Check the result of this FlashCopy mapping, as shown in Figure 10-180. For each FlashCopy mapping relationship created, a mapping name is automatically generated starting with fcmapX, where X is an available number. If needed, you can rename these mappings; see 10.7.4 on page 692.

Figure 10-180 FlashCopy mapping

At this point, the FlashCopy mapping is ready to be used. Tip: You can invoke FlashCopy from the SVC GUI, but using the SVC GUI might be impractical if you plan to handle a large number of FlashCopy mappings or Consistency Groups periodically, or at varying times. In such cases, creating a script by using the CLI might be more convenient.
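As a sketch only, assuming a hypothetical source volume VOL2 in storage pool Pool1 and I/O group 0, the CLI steps that correspond to creating a new thin-provisioned target and its mapping might look as follows; adjust the names, size, and thresholds to your environment and verify the syntax for your code level:

svctask mkvdisk -name VOL2_T -mdiskgrp Pool1 -iogrp 0 -size 10 -unit gb -rsize 2% -autoexpand -warning 80%
svctask mkfcmap -source VOL2 -target VOL2_T -copyrate 0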

10.8.2 Creating and starting a snapshot preset with a single click


To create and start a snapshot with one click, perform these steps.


Note: The snapshot creates a point-in-time view of production data. The snapshot is not intended to be an independent copy, but instead is used to maintain a view of the production data at the time that the snapshot is created. Therefore, the snapshot holds only the data from regions of the production volume that have changed since the snapshot was created. Because the snapshot preset uses thin provisioning, only the capacity that is required for the changes is used.

Snapshot preset parameters:
Background copy: No
Incremental: No
Delete after completion: No
Cleaning rate: No
Target pool: primary copy source pool

1. From the SVC Welcome panel, click Copy Services in the left menu and then click the FlashCopy panel.
2. Select the volume that you want to snapshot.
3. Click New Snapshot in the Actions menu (Figure 10-181).

Figure 10-181 New Snapshot option


4. A volume is created as a target volume for this snapshot in the same pool as the source volume. The FlashCopy mapping is created and it is started. You can check the FlashCopy progress in the Progress column or in the Running Tasks column as shown in Figure 10-182 on page 741.

Figure 10-182 Snapshot created and started
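Tip: The snapshot preset corresponds roughly to a mapping with a background copy rate of zero against a thin-provisioned target. A minimal CLI sketch, assuming a hypothetical source volume VOL3 with an existing thin-provisioned target VOL3_snap and a resulting mapping fcmap0:

svctask mkfcmap -source VOL3 -target VOL3_snap -copyrate 0
svctask startfcmap -prep fcmap0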

10.8.3 Creating and starting a clone preset with a single click


To create and start a clone with one click, perform these steps.

Note: The clone preset creates an exact replica of the volume, which can be changed without impacting the original volume. After the copy completes, the mapping that was created by the preset is automatically deleted.

Clone preset parameters:
Background copy rate: 50
Incremental: No
Delete after completion: Yes
Cleaning rate: 50
Target pool: primary copy source pool

1. From the SVC Welcome panel, click Copy Services in the left menu and then click the FlashCopy panel.
2. Select the volume that you want to clone.
3. Click New Clone in the Actions menu (Figure 10-183 on page 742).


Figure 10-183 New clone option

4. A volume is created as a target volume for this clone in the same pool as the source volume. The FlashCopy mapping is created and started as shown in Figure 10-184. You can check the FlashCopy progress in the Progress column or in the Running Tasks column.


Figure 10-184 Clone created and started
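Tip: The clone preset corresponds roughly to the following CLI sketch, shown here with a hypothetical source VOL4 and target VOL4_clone; the -autodelete flag removes the mapping after the background copy completes:

svctask mkfcmap -source VOL4 -target VOL4_clone -copyrate 50 -cleanrate 50 -autodelete
svctask startfcmap -prep fcmap1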

10.8.4 Creating and starting a backup preset with a single click


To create and start a backup with one click, perform these steps.

Note: The backup preset creates a point-in-time replica of the production data. After the copy completes, the backup view can be refreshed from the production data, with minimal copying of data from the production volume to the backup volume.

Backup preset parameters:
Background copy rate: 50
Incremental: Yes
Delete after completion: No
Cleaning rate: 50
Target pool: primary copy source pool

1. From the SVC Welcome panel, click Copy Services in the left menu and then click the FlashCopy panel.
2. Select the volume that you want to back up.
3. Click New Backup in the Actions menu (Figure 10-185).


Figure 10-185 New backup option

4. A volume is created as a target volume for this backup in the same pool as the source volume. The FlashCopy mapping is created and started. You can check the FlashCopy progress in the Progress column or in the Running Tasks column (Figure 10-186 on page 745).


Figure 10-186 Backup created and started
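Tip: The backup preset corresponds roughly to an incremental mapping. A CLI sketch with hypothetical volumes VOL5 and VOL5_bkp; after the first complete copy, restarting the mapping copies only the changed data:

svctask mkfcmap -source VOL5 -target VOL5_bkp -copyrate 50 -cleanrate 50 -incremental
svctask startfcmap -prep fcmap2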

10.8.5 Creating a FlashCopy Consistency Group


To create a FlashCopy Consistency Group in the SVC GUI, perform these steps: 1. From the SVC Welcome panel, click Copy Services and then click Consistency Groups. The Consistency Groups panel opens (Figure 10-187 on page 746).


Figure 10-187 Consistency Group panel

2. Click New Consistency Group (Figure 10-188).

Figure 10-188 Create a FlashCopy Consistency Group

3. Enter the desired FlashCopy Consistency Group name and click Create (Figure 10-189).

Figure 10-189 New Consistency Group window

Consistency Group name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The Consistency Group name can be between one and 63 characters in length.

4. Figure 10-190 on page 747 shows the result.


Figure 10-190 View Consistency Group
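Tip: The CLI equivalent is the svctask mkfcconsistgrp command; the group name shown here is hypothetical:

svctask mkfcconsistgrp -name FCCG1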

10.8.6 Creating FlashCopy mappings in a Consistency Group


In this section, we create FlashCopy mappings for volumes with their respective targets. The source and target volumes were created prior to this operation. To perform this action, follow these steps:

1. From the SVC Welcome panel, click Copy Services and then click Consistency Groups. The Consistency Groups panel opens as shown in Figure 10-187 on page 746.

2. Select the Consistency Group (see Figure 10-191) in which you want to create the FlashCopy mapping. If you prefer not to create a FlashCopy mapping in a Consistency Group, select Not in a Group in the list.

Figure 10-191 Consistency Group selection

3. If you select a Consistency Group, click New FlashCopy Mapping in the Actions menu (Figure 10-192).


Figure 10-192 New FlashCopy mapping action for a Consistency Group

If you did not select a Consistency Group, click New FlashCopy Mapping (Figure 10-193).

Consistency Groups: If no Consistency Group is defined, the mapping is a stand-alone mapping, and it can be prepared and started without affecting other mappings. All mappings in the same Consistency Group must have the same status to maintain the consistency of the group.

Figure 10-193 New FlashCopy Mapping

4. The New FlashCopy Mapping window opens (Figure 10-194). In this window you must create the relationships between the source volumes (the disks that are copied) and the target volumes (the disks that receive the copy). A mapping can be created between any two volumes in a cluster. Important: The source and target volumes must be of equal size.

Figure 10-194 New FlashCopy Mapping


Note: The volumes do not have to be in the same I/O group or storage pool.

5. Select a volume in the Source Volumes column using the drop-down list, then select a volume in the Target Volumes column using the drop-down list and click Add, as shown in Figure 10-194 on page 748. Repeat this action to create other relationships. To remove a relationship that has been created, use the remove button.

Important: The source and target volumes must be of equal size. So, for a given source volume, only the targets of the appropriate size are shown.

6. Click Next after all of the relationships that you want to create are registered (Figure 10-195).

Figure 10-195 New FlashCopy Mapping with relationships created

7. In the next window, you need to select one FlashCopy preset. The GUI provides three presets (Snapshot, Clone, and Backup) to simplify the more common FlashCopy operations (Figure 10-196). The presets and their use cases are described here:

Snapshot: Creates a copy-on-write point-in-time copy of the source volume.

Clone: Creates an exact replica of the source volume on a target volume. The copy can be changed without impacting the original volume.

Backup: Creates a FlashCopy mapping that can be used to recover data or objects if the system experiences data loss. These backups can be copied multiple times from source and target volumes.

Figure 10-196 New FlashCopy Mapping window


Whichever preset you select, you can customize various advanced options. To access these settings, click the Advanced Settings button. If you prefer not to customize these settings, go directly to step 8. You can customize the following options, as shown in Figure 10-197:

Background Copy Rate: This option determines the priority that is given to the copy process. A faster rate increases the priority of the process, which might affect the performance of other operations.

Incremental: This option copies only the parts of the source or target volumes that have changed since the last copy. Incremental copies reduce the completion time of the copy operation.

Note: Even if the type of the FlashCopy mapping is incremental, the first copy process copies all of the data from the source to the target volume.

Delete after completion: This option automatically deletes a FlashCopy mapping after the background copy is completed. Do not use this option when the background copy rate is set to zero (0).

Cleaning Rate: This option minimizes the amount of time that a mapping is in the stopping state. If the mapping has not completed, the target volume is offline while the mapping is stopping.

Figure 10-197 New FlashCopy Mapping Advanced Settings

8. If you did not create these FlashCopy mappings from a Consistency Group (see step 3 on page 747), you will have to confirm your choice by selecting No, do not add the mappings to a Consistency Group (Figure 10-198 on page 751).


Figure 10-198 Add the mappings to a Consistency Group window.

9. Click Finish as shown in Figure 10-197 on page 750.

10. Check the result of this FlashCopy mapping in the Consistency Groups window, as shown in Figure 10-199. For each FlashCopy mapping relationship created, a mapping name is automatically generated starting with fcmapX, where X is an available number. If needed, you can rename these mappings; see 10.7.4 on page 692.

Figure 10-199 FlashCopy mappings result

Tip: You can invoke FlashCopy from the SVC GUI, but using the SVC GUI might be impractical if you plan to handle a large number of FlashCopy mappings or Consistency Groups periodically, or at varying times. In this case, creating a script by using the CLI might be more convenient.
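As a sketch of such a script, assuming hypothetical volumes App_vol1 through App_vol3 with matching App_volN_T targets, a Consistency Group FCCG1, and SSH access to the cluster as user admin at a hypothetical address cluster_ip, you might run the following from a management host; verify each command against the command reference before use:

# create one mapping per volume and place it directly into the Consistency Group
for i in 1 2 3; do
  ssh admin@cluster_ip svctask mkfcmap -source App_vol$i -target App_vol${i}_T -consistgrp FCCG1
done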

10.8.7 Show Dependent Mappings


Perform the following steps to show the dependent mappings for a given FlashCopy mapping:

1. From the SVC Overview panel, click Copy Services in the left menu and then click the FlashCopy, Consistency Groups, or FlashCopy Mappings panel.

2. Select the volume (from the FlashCopy panel only) or the FlashCopy mapping for which you want to show the dependent mappings.

3. Click Show Dependent Mappings in the Actions menu (Figure 10-200).

Tip: You can also right-click a FlashCopy mapping and select Show Dependent Mappings from the list.


Figure 10-200 Show Dependent Mappings

In the Dependent Mappings window (Figure 10-201), you can see the dependent mappings for a given volume or FlashCopy mapping. If you click one of these volumes, you can see its properties. For more information about volume properties, see 10.7.1, Volume information on page 681.

Figure 10-201 Dependent Mappings

4. Click Close to close this window.
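Tip: On the CLI, the svcinfo lsfcmapdependentmaps command lists the mappings that depend on a given mapping; fcmap0 is a hypothetical mapping name:

svcinfo lsfcmapdependentmaps fcmap0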

10.8.8 Moving a FlashCopy mapping to a Consistency Group


Perform the following steps to move a FlashCopy mapping to a Consistency Group:

1. From the SVC Welcome panel, click Copy Services in the left menu and then click the FlashCopy, Consistency Groups, or FlashCopy Mappings panel.

2. Select the FlashCopy mapping that you want to move to a Consistency Group, or whose Consistency Group you want to change.

3. Click Move to Consistency Group in the Actions menu (Figure 10-202).

Tip: You can also right-click a FlashCopy mapping and select Move to Consistency Group from the list.


Figure 10-202 Move to Consistency Group action

4. In the Move FlashCopy Mapping to Consistency Group window, select the Consistency Group for this FlashCopy mapping using the drop-down list (Figure 10-203).

Figure 10-203 Move a FlashCopy mapping to a Consistency Group

5. Click Move to Consistency Group to confirm your changes.

10.8.9 Removing a FlashCopy mapping from a Consistency Group


Perform the following steps to remove a FlashCopy mapping from a Consistency Group: 1. From the SVC Overview panel, click Copy Services in the left menu and then click the FlashCopy, Consistency Groups, or FlashCopy Mappings panel. 2. Select the FlashCopy mapping that you want to remove from a Consistency Group. 3. Click Remove from Consistency Group in the Actions menu (Figure 10-204 on page 753). Tip: You can also right-click a FlashCopy mapping and select Remove from Consistency Group from the list.

Figure 10-204 Remove from Consistency Group action


4. In the Remove FlashCopy Mapping from Consistency Group window, click Remove (Figure 10-205).

Figure 10-205 Remove FlashCopy mapping
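Tip: On the CLI, you can move a mapping into a group with the -consistgrp flag of svctask chfcmap. As a sketch with a hypothetical mapping fcmap0 and group FCCG1, and on the assumption (verify for your code level) that specifying consistency group 0 makes a mapping stand-alone again:

svctask chfcmap -consistgrp FCCG1 fcmap0
svctask chfcmap -consistgrp 0 fcmap0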

10.8.10 Modifying a FlashCopy mapping


Perform the following steps to modify a FlashCopy mapping: 1. From the SVC Welcome panel, click Copy Services in the left menu and then click the FlashCopy, Consistency Groups, or FlashCopy Mappings panel. 2. Select the FlashCopy mapping that you want to modify in the table. 3. Click Edit Properties in the Actions menu (Figure 10-206).

Figure 10-206 Edit properties

Tip: You can also right-click a FlashCopy mapping and select Edit Properties from the list.

4. In the Edit Properties window, you can modify the following parameters for a selected FlashCopy mapping, as shown in Figure 10-207:

Background Copy Rate: This option determines the priority that is given to the copy process. A faster rate increases the priority of the process, which might affect the performance of other operations.

Cleaning Rate: This option minimizes the amount of time that a mapping is in the stopping state. If the mapping has not completed, the target volume is offline while the mapping is stopping.


Figure 10-207 Edit FlashCopy Mapping

5. Click Save to confirm your changes.
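Tip: The same properties can be changed on the CLI with svctask chfcmap; the mapping name and the rate values here are hypothetical examples:

svctask chfcmap -copyrate 80 -cleanrate 40 fcmap0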

10.8.11 Renaming a FlashCopy mapping


Perform the following steps to rename a FlashCopy mapping: 1. From the SVC Welcome panel, click Copy Services and then click Consistency Groups or FlashCopy Mappings. 2. Select the FlashCopy mapping that you want to rename in the table. 3. Click Rename in the Actions menu (Figure 10-208). Tip: You can also right-click a FlashCopy mapping and select Rename from the list.

Figure 10-208 Rename Action

4. In the Rename Mapping window, type the new name that you want to assign to the FlashCopy mapping and click Rename (Figure 10-209 on page 756).


Figure 10-209 Renaming a FlashCopy mapping

FlashCopy name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The mapping name can be between one and 63 characters in length.

10.8.12 Renaming a Consistency Group


To rename a Consistency Group, perform the following steps:

1. From the SVC Overview panel, click the Copy Services menu and then click Consistency Groups.

2. Select the Consistency Group that you want to rename from the left panel. Then select Rename from the Actions menu (Figure 10-210).

Figure 10-210 Renaming a Consistency Group

3. Type the new name that you want to assign to the Consistency Group and click Rename (Figure 10-211).

Figure 10-211 Changing the name for a Consistency Group

Consistency Group name: The name can consist of the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the underscore (_) character. The name can be between one and 63 characters in length. However, the name cannot start with a number, the dash or the underscore.


4. From the Consistency Group panel, the new Consistency Group name is displayed.
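Tip: The CLI equivalents for renaming a FlashCopy mapping (10.8.11) and a Consistency Group (this section) are svctask chfcmap and svctask chfcconsistgrp; the old and new names shown here are hypothetical:

svctask chfcmap -name fcmap_DB1 fcmap0
svctask chfcconsistgrp -name FCCG_DB FCCG1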

10.8.13 Deleting a FlashCopy mapping


Perform the following steps to delete a FlashCopy mapping: 1. From the SVC Overview panel, click Copy Services and then click the FlashCopy, Consistency Groups or the FlashCopy Mappings panel. 2. Select the FlashCopy mapping that you want to delete in the table. Note: To select multiple FlashCopy mappings, hold down the Ctrl key and use the mouse to select the entries (this is only available in the Consistency Groups and FlashCopy mappings panels). 3. Click Delete Mapping in the Actions menu (Figure 10-212). Tip: You can also right-click a FlashCopy mapping and select Delete Mapping from the list.

Figure 10-212 Delete Mapping action

4. The Delete Mapping window opens, as shown in Figure 10-213 on page 758. In the Verify the number of FlashCopy mappings you are deleting field, you must enter a value that matches the number of mappings that you want to delete. This verification was added to help prevent the deletion of the wrong mappings. If you still have target volumes that are inconsistent with the source volumes and you definitely want to delete these FlashCopy mappings, select the Delete the FlashCopy mapping even when the data on the target volume is inconsistent with the source volume option. Click Delete to complete the operation (Figure 10-213).


Figure 10-213 Delete FlashCopy Mapping
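Tip: The CLI equivalent is svctask rmfcmap; the -force flag deletes the mapping even when the target is not yet consistent with the source, which corresponds to the check box in Figure 10-213. The mapping name is hypothetical:

svctask rmfcmap -force fcmap0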

10.8.14 Deleting a FlashCopy Consistency Group


Important: Deleting a Consistency Group does not delete the FlashCopy mappings. Perform the following steps to delete a FlashCopy Consistency Group: 1. From the SVC Overview panel, click Copy Services and then click the Consistency Groups panel. 2. Select the FlashCopy Consistency Group that you want to delete. 3. Click Delete in the Actions menu (Figure 10-214 on page 758).

Figure 10-214 Delete action

4. The Warning window opens (Figure 10-215). Click OK to complete the operation.


Figure 10-215 Warning window
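Tip: The CLI equivalent is svctask rmfcconsistgrp. As an assumption to verify for your code level, the -force flag removes a group that still contains mappings; the mappings themselves remain as stand-alone mappings:

svctask rmfcconsistgrp -force FCCG1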

10.8.15 Starting FlashCopy mappings


When the FlashCopy mapping is created, the copy process can be started. Only mappings that are not a member of a Consistency Group, or the only mapping in a Consistency Group, can be started individually. 1. From the SVC Welcome panel, click Copy Services and then click the FlashCopy or the FlashCopy Mappings panel. 2. Select the FlashCopy mapping that you want to start in the table. 3. Click Start in the Actions menu (Figure 10-216 on page 759) to start the FlashCopy Mapping. Tip: You can also right-click a FlashCopy mapping and select Start from the list.

Figure 10-216 Start action

4. You can check the FlashCopy progress in the Progress column of the table or in the Running Tasks section (Figure 10-217).

Figure 10-217 Checking FlashCopy progress


5. After the task is completed, the FlashCopy status is in a Copied state (Figure 10-218).

Figure 10-218 Copied FlashCopy
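Tip: On the CLI, a mapping is prepared and started in one step with the -prep flag, and the copy progress can be queried afterward; fcmap0 is a hypothetical mapping name:

svctask startfcmap -prep fcmap0
svcinfo lsfcmapprogress fcmap0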

10.8.16 Starting a FlashCopy Consistency Group


All of the mappings in a Consistency Group will be brought to the same state. To start the FlashCopy Consistency Group, perform these steps: 1. From the SVC Overview window, click Copy Services and then click the Consistency Groups panel. 2. Select the Consistency Group that you want to start (Figure 10-219).

Figure 10-219 FlashCopy Consistency Groups window

3. Click Start in the Actions menu (Figure 10-220) to start the FlashCopy Consistency Group.

Figure 10-220 Start action


4. You can check the FlashCopy Consistency Group progress in the Progress column or in the Running Tasks section (Figure 10-221 on page 761).

Figure 10-221 Checking FlashCopy Consistency Group progress

5. After the task is completed, the FlashCopy status is in a Copied state (Figure 10-222).

Figure 10-222 Copied FlashCopy Consistency Group
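Tip: The CLI equivalent, with a hypothetical group name:

svctask startfcconsistgrp -prep FCCG1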

10.8.17 Stopping the FlashCopy Consistency Group


When a FlashCopy Consistency Group is stopped, the target volumes become invalid and are set offline by the SVC. The FlashCopy mapping or Consistency Group must be prepared again or retriggered to bring the target volumes online again. Important: Only stop a FlashCopy mapping when the data on the target volume is useless, or if you want to modify the FlashCopy mapping. When a FlashCopy mapping is stopped, the target volume becomes invalid and is set offline by the SVC, as shown in Figure 10-225 on page 762.

Perform the following steps to stop a FlashCopy Consistency Group:

1. From the SVC Overview panel, click Copy Services and then click the Consistency Groups panel.

2. Select the FlashCopy Consistency Group that you want to stop.

3. Click Stop in the Actions menu (Figure 10-223) to stop the FlashCopy Consistency Group.


Figure 10-223 Stop action

4. Notice that the FlashCopy Consistency Group status has changed to Stopped (Figure 10-224).

Figure 10-224 FlashCopy Consistency Group status

5. The target volumes are now shown as Offline in the Volumes menu (Figure 10-225).

Figure 10-225 Targeted volume is offline as shown by the marker


10.8.18 Stopping the FlashCopy mapping


When a FlashCopy is stopped, the target volumes become invalid and are set offline by the SVC. The FlashCopy mapping must be retriggered to bring the target volumes online again. Important: Only stop a FlashCopy mapping when the data on the target volume is useless, or if you want to modify the FlashCopy mapping. When a FlashCopy mapping is stopped, the target volume becomes invalid and is set offline by the SVC.

Perform the following steps to stop a FlashCopy mapping:

1. From the SVC Welcome panel, click Copy Services and then click the FlashCopy or FlashCopy Mappings panel.

2. Select the FlashCopy mapping that you want to stop.

3. Click Stop in the Actions menu (Figure 10-226) to stop the FlashCopy mapping.

Figure 10-226 Stopping the FlashCopy Consistency Group

4. Notice that the FlashCopy mapping status has now changed to Stopped (Figure 10-227 on page 763).

Figure 10-227 FlashCopy Consistency Group status
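Tip: The CLI equivalents for stopping a FlashCopy mapping (this section) or a FlashCopy Consistency Group (10.8.17) are as follows; the names are hypothetical:

svctask stopfcmap fcmap0
svctask stopfcconsistgrp FCCG1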

10.8.19 Migrating between a fully allocated volume and a Space-Efficient volume


If you want to migrate from a fully allocated volume to a Space-Efficient volume, follow the same procedure as described in 10.8.1, Creating a FlashCopy Mapping on page 728. However, make sure that you either select a Space-Efficient volume that has already been created as your target volume, or create one. You can use this same method to migrate from a Space-Efficient volume to a fully allocated volume.


Create a FlashCopy mapping with the fully allocated volume as the source and the Space-Efficient volume as the target. Important: The copy process overwrites all of the data on the target volume. You must back up all of the data before you start the copy process.

10.8.20 Reversing and splitting a FlashCopy mapping


You can now perform a reverse FlashCopy mapping without having to remove the original FlashCopy mapping, and without restarting a FlashCopy mapping from the beginning. Figure 10-228 on page 764 shows an example of reverse FlashCopy dependency. You can start a FlashCopy mapping whose target is the source of another FlashCopy mapping.

Figure 10-228 Dependent Mappings

This capability enables you to reverse the direction of a FlashCopy map without having to remove existing maps, and without losing the data from the target as shown in Figure 10-229.

Figure 10-229 Reverse FlashCopy
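Tip: On the CLI, a reverse mapping whose target is the source of another active mapping is started with the -restore flag of svctask startfcmap; the mapping name is hypothetical, and you should verify the flag's behavior for your code level:

svctask startfcmap -prep -restore fcmap_reverse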

10.9 Copy Services: managing Remote Copy


Working with Metro Mirror or Global Mirror is often easier through the GUI, as long as you have a small number of relationships. When you work with many relationships, use the CLI to execute your commands.

Note: See Chapter 8, Advanced Copy Services on page 373 for more information about the functionality of Copy Services in the SVC environment.


In this section, we describe the tasks that you can perform at a remote copy level. There are two panels that you can use to visualize and manage your remote copies:

1. The Remote Copy panel, shown in Figure 10-230 on page 765. The Metro Mirror and Global Mirror Copy Services features enable you to set up a relationship between two volumes, so that updates that are made by an application to one volume are mirrored on the other volume. The volumes can be in the same cluster or on two different clusters.

Figure 10-230 Remote Copy panel

2. The Partnerships panel, shown in Figure 10-231 on page 766. Partnerships can be used to create a disaster recovery environment, or to migrate data between clusters that are in different locations. Partnerships define an association between a local cluster and a remote cluster.


Figure 10-231 Partnerships panel

10.9.1 Cluster partnership


A cluster partnership is not limited to one-to-one. You can have a cluster partnership among multiple SVC clusters, which allows you to create four types of configurations, using a maximum of four connected clusters:

Star configuration, as shown in Figure 10-232.

Figure 10-232 Star configuration

Triangle configuration, as shown in Figure 10-233 on page 767.


Figure 10-233 Triangle configuration

Fully connected configuration, as shown in Figure 10-234.

Figure 10-234 Fully connected configuration

Daisy-chain configuration, as shown in Figure 10-235.

Figure 10-235 Daisy-chain configuration

Important: All SVC clusters must be at level 5.1 or higher.


10.9.2 Creating the SVC partnership between two remote SVC Clusters
We perform this operation to create the partnership on both clusters.

Note: If you are creating an intracluster Metro Mirror, do not perform this next step to create the SVC cluster Metro Mirror partnership. Instead, go to 10.9.3, Creating stand-alone remote copy relationships on page 770.

To create a partnership between the SVC clusters using the GUI, follow these steps:

1. From the SVC Overview panel, click Copy Services → Partnerships. The Partnerships panel opens as shown in Figure 10-236.

Figure 10-236 Partnerships panel

2. Click the New Partnership button to create a new partnership with another cluster, as shown in Figure 10-237.

Figure 10-237 New partnership button

3. On the New Partnership window (Figure 10-238 on page 769), complete the following elements:

Select an available cluster in the drop-down list. If there is no candidate, you will receive the following error message: This cluster does not have any candidates.

Enter a bandwidth (MBps) that is used by the background copy process between the clusters in the partnership. Set this value so that it is less than or equal to the bandwidth that can be sustained by the communication link between the clusters. The link must be able to sustain any host requests and the rate of background copy.

Figure 10-238 New partnership window

4. Click the Create button to confirm the partnership relation. As shown in Figure 10-239, our partnership is in the Partially Configured state, because we have only performed the work on one side of the partnership so far.

Figure 10-239 Viewing cluster partnerships

To fully configure the cluster partnership, we must perform the same steps on the other SVC cluster (ITSO_SVC3) as we did on this one (ITSO_SVC2). For simplicity and brevity, only the two most significant windows are shown when the partnership is fully configured. 5. Launching the SVC GUI for ITSO_SVC3, we select ITSO_SVC2 for the cluster partnership and specify the available bandwidth for the background copy, again 200 MBps, and then click Create. Now that both sides of the SVC cluster partnership are defined, the resulting windows shown in Figure 10-240 and Figure 10-241 on page 770 confirm that our cluster partnership is now in the Fully Configured state. Figure 10-240 shows Cluster ITSO-CLS1.

Figure 10-240 Cluster ITSO_SVC2 - Fully configured cluster partnership


Figure 10-241 on page 770 shows Cluster ITSO_SVC3.

Figure 10-241 Cluster ITSO_SVC3 - Fully configured cluster partnership
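Tip: The same partnership can be defined on the CLI with svctask mkpartnership, run once on each cluster. A sketch using the cluster names and the 200 MBps bandwidth from this example:

On ITSO_SVC2: svctask mkpartnership -bandwidth 200 ITSO_SVC3
On ITSO_SVC3: svctask mkpartnership -bandwidth 200 ITSO_SVC2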

10.9.3 Creating stand-alone remote copy relationships


In this section, we create remote copy mappings for volumes with their respective remote targets. The source and target volumes were created prior to this operation on both clusters. To perform this action, follow these steps:

1. From the SVC Overview panel, click Copy Services → Remote Copy.

2. Click New Relationship as shown in Figure 10-242.

Figure 10-242 New relationship action

3. In the New Relationship window, select the type of relationship that you want to create (Figure 10-243 on page 771):

Metro Mirror: This type of remote copy creates a synchronous copy of data from a primary volume to a secondary volume. A secondary volume can either be located on the same cluster or on another cluster.

Global Mirror: This provides a consistent copy of a source volume on a target volume. Data is written to the target volume asynchronously, so that the copy is continuously updated, but the copy might not contain the last few updates in the event that a disaster recovery operation is performed.

Global Mirror with Change Volumes: This provides a consistent copy of a source volume on a target volume. Data is written to the target volume asynchronously, and the copy is continuously updated. Change volumes are used to record changes to the remote copy volume, and these changes can then be copied to the remote cluster asynchronously. A FlashCopy relationship exists between the remote copy volume and its change volume; this FlashCopy mapping is for internal use, so you cannot manipulate it like a normal FlashCopy mapping, and most svctask *fcmap commands will fail against it.


Then, click Next.

Figure 10-243 Select the type of relation that you want to create

4. In the next window, select where the auxiliary volumes are located, as shown in Figure 10-244:

On this system: the volumes are located locally.

On another system: in this case, select the remote system from the drop-down list.

Figure 10-244 Auxiliary volumes location

5. In this window, you can create new relationships. Select a volume in the Master drop-down list, then select a volume in the Auxiliary drop-down list for this master and click Add (Figure 10-245 on page 772). If needed, repeat this action to create other relationships.

Important: The Master and Auxiliary volumes must be of equal size. So, for a given source volume, only the targets of the appropriate size are returned.


Figure 10-245 Create relationships between master and auxiliary volumes

To remove a relationship that has been created, use the remove button shown in Figure 10-245. After all of the relationships that you want to create are registered, click Next.

6. Select whether the volumes are already synchronized, as shown in Figure 10-246, then click Next.

Figure 10-246 Volumes synchronized

7. Finally, on the last window, select whether you want to start copying the data, as shown in Figure 10-247, and then click Finish.

Figure 10-247 Synchronize now

The relationships are visible in the Remote Copy panel. If you selected to copy the data, you can see that their status is Inconsistent Copying. You can check the copying progress in the Running tasks as shown in Figure 10-248 on page 773.


Figure 10-248 Remote Copy panel with an inconsistent copying status

After the copy is finished, the relationship status changes to Consistent Synchronized.
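Tip: As a CLI sketch, assuming hypothetical volumes MM_Vol1 (local master) and MM_Vol2 (auxiliary on the remote cluster ITSO_SVC3), a Metro Mirror relationship is created and started as follows; add the -global flag for Global Mirror, or -sync if the volumes are already synchronized:

svctask mkrcrelationship -master MM_Vol1 -aux MM_Vol2 -cluster ITSO_SVC3 -name MMREL1
svctask startrcrelationship MMREL1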

10.9.4 Creating a Consistency Group


To create a Consistency Group, follow these steps:

1. From the SVC Overview panel, click Copy Services → Remote Copy.

2. Click New Consistency Group (Figure 10-249).

Figure 10-249 New Consistency Group action

3. Enter a name for the Consistency Group and then click Next (Figure 10-250).


Note: If you do not provide a name, the SVC automatically generates the name rccstgrpX, where X is the ID sequence number that is assigned by the SVC internally. You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The Consistency Group name can be between one and 15 characters in length.

Figure 10-250 Enter a Consistency Group name

4. In the next window, select where the auxiliary volumes are located, as shown in Figure 10-251:

On this system: the volumes are located locally.

On another system: in that case, select the remote system in the drop-down list.

After you make a selection, click Next.

Figure 10-251 Auxiliary volumes location

5. Select whether you want to add relationships to this group, as shown in Figure 10-252. There are two options:

If you answer Yes, click Next to continue the wizard and go to step 6.

If you answer No, click Finish to create an empty Consistency Group that can be used later.

Figure 10-252 Add relationships to this group


6. Select the type of relationship that you want to create (Figure 10-253):

Metro Mirror: This type of remote copy creates a synchronous copy of data from a primary volume to a secondary volume. A secondary volume can either be located on the same cluster or on another cluster.

Global Mirror: This provides a consistent copy of a source volume on a target volume. Data is written to the target volume asynchronously, so that the copy is continuously updated, but the copy might not contain the last few updates in the event that a disaster recovery operation is performed.

Global Mirror with Change Volumes: This provides a consistent copy of a source volume on a target volume. Data is written to the target volume asynchronously, and the copy is continuously updated. Change volumes are used to record changes to the remote copy volume, and these changes can then be copied to the remote cluster asynchronously. A FlashCopy relationship exists between the remote copy volume and its change volume; this FlashCopy mapping is for internal use, so you cannot manipulate it like a normal FlashCopy mapping, and most svctask *fcmap commands will fail against it.

Click Next.

Figure 10-253 Select the type of relation that you want to create

7. As shown in Figure 10-254, you can optionally select existing relationships to add to the group, then click Next. Note: To select multiple relationships, hold down Ctrl and use your mouse to select the entries you want to include.


Figure 10-254 Select existing relationships to add to the group

8. In this window, you can create new relationships. Select a volume in the Master drop-down list, then select a volume in the Auxiliary drop-down list for this master. Click Add as shown in Figure 10-255. Repeat this action to create other relationships if needed.

Important: The Master and Auxiliary volumes must be of equal size. So, for a given source volume, only the targets of the appropriate size are included.

To remove a relationship that has been created, use the remove button as shown in Figure 10-255. After all of the relationships that you want to create are registered, click Next.

Figure 10-255 Create relationships between Master and Auxiliary volumes

9. Select whether the volumes are already synchronized, as shown in Figure 10-256, then click Next.


Figure 10-256 Volumes synchronized

10. Finally, on the last window, select whether you want to start copying the data, as shown in Figure 10-257 on page 777, and then click Finish.

Figure 10-257 Synchronize now

11. The relationships are visible in the Remote Copy panel. If you selected to copy the data, you can see that their status is Inconsistent Copying. You can check the copying progress in the Running tasks, as shown in Figure 10-258.

Figure 10-258 Consistency Group created with relationship in copying status

After the copies are completed, the relationships and the Consistency Group change to the Consistent Synchronized status.
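Tip: As a CLI sketch, a Consistency Group is created with svctask mkrcconsistgrp, and a relationship can be placed into it at creation time with the -consistgrp flag of svctask mkrcrelationship; the names are hypothetical:

svctask mkrcconsistgrp -name CG_W2K3 -cluster ITSO_SVC3
svctask mkrcrelationship -master MM_Vol1 -aux MM_Vol2 -cluster ITSO_SVC3 -consistgrp CG_W2K3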


10.9.5 Renaming a Consistency Group


To rename a Consistency Group, perform the following steps:

1. From the SVC Overview panel, click Copy Services → Remote Copy.

2. Select the Consistency Group that you want to rename in the panel. Then select Rename in the Actions menu, as shown in Figure 10-259 on page 778.

Figure 10-259 Renaming a Consistency Group

3. Type the new name that you want to assign to the Consistency Group and press Enter (Figure 10-260).

Figure 10-260 Changing the name for a Consistency Group

Consistency Group name: The Consistency Group name can consist of the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the underscore (_) character. The name can be between one and 15 characters in length. However, the name cannot start with a number, the dash, or the underscore.

4. From the Remote Copy panel, the new Consistency Group name is displayed.

10.9.6 Renaming a Remote Copy relationship


Perform the following steps to rename a Remote Copy relationship:

1. From the SVC Overview panel, click Copy Services → Remote Copy.


2. Select the Remote Copy relationship that you want to rename in the table.

3. Click Rename in the Actions menu (Figure 10-261 on page 779).

Tip: You can also right-click a Remote Copy relationship and select Rename from the list.

Figure 10-261 Rename Remote Copy relationship Action

4. In the Rename Relationship window, type the new name that you want to assign to the Remote Copy relationship and click OK (Figure 10-262).

Figure 10-262 Renaming a remote copy relationship

Remote Copy relationship name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The Remote Copy name can be between one and 15 characters in length.
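Tip: The CLI equivalents for renaming a Remote Copy relationship (this section) and a Consistency Group (10.9.5) are svctask chrcrelationship and svctask chrcconsistgrp; the names shown here are hypothetical:

svctask chrcrelationship -name MMREL_DB MMREL1
svctask chrcconsistgrp -name CG_DB CG_W2K3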

10.9.7 Moving a stand-alone Remote Copy relationship to a Consistency Group


Perform the following steps to move a Remote Copy relationship to a Consistency Group:

1. From the SVC Overview panel, click Copy Services → Remote Copy.

2. Expand the Not in a Group column.


3. Select the relationship that you want to move to a Consistency Group.

4. Click Add to Consistency Group in the Actions menu, as shown in Figure 10-263 on page 780.

Tip: You can also right-click a Remote Copy relationship and select Add to Consistency Group from the list.

Figure 10-263 Adding to Consistency Group action

5. In the Add Relationship to Consistency Group window, select the Consistency Group for this Remote Copy relationship using the drop-down list (Figure 10-264).

Figure 10-264 Adding a relationship to a Consistency Group

6. Click Add to Consistency Group to confirm your changes.
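Tip: The CLI equivalent is the -consistgrp flag of svctask chrcrelationship; the names are hypothetical:

svctask chrcrelationship -consistgrp CG_W2K3 MMREL1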

10.9.8 Removing Remote Copy relationship from a Consistency Group


Perform the following steps to remove a Remote Copy relationship from a Consistency Group:

1. From the SVC Overview panel, click Copy Services → Remote Copy.

2. Select a Consistency Group.

3. Select the Remote Copy relationship that you want to remove from a Consistency Group.


4. Click Remove from Consistency Group in the Actions menu (Figure 10-265 on page 781). Tip: You can also right-click a Remote Copy relationship and select Remove from Consistency Group from the list.

Figure 10-265 Remove from Consistency Group action

5. In the Remove Relationship From Consistency Group window, click Remove (Figure 10-266).

Figure 10-266 Remove relationship from Consistency Group
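Tip: The CLI equivalent is the -noconsistgrp flag of svctask chrcrelationship; the relationship name is hypothetical:

svctask chrcrelationship -noconsistgrp MMREL1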

10.9.9 Starting a Remote Copy relationship


When a Remote Copy relationship is created, the Remote Copy process can be started. Only relationships that are not members of a Consistency Group, or the only relationship in a Consistency Group, can be started individually. Perform the following steps to start a Remote Copy relationship:

1. From the SVC Overview panel, click Copy Services → Remote Copy.

2. Expand the Not in a Group column.


3. Select the Remote Copy relationship that you want to start in the table. 4. Click Start in the Actions menu (Figure 10-267 on page 782) to start the Remote Copy process. Tip: You can also right-click a relationship and select Start from the list.

Figure 10-267 Start action

5. If the relationship was not consistent, the Remote Copy progress can be checked in the Running tasks (Figure 10-268).

Figure 10-268 Checking Remote Copy synchronization progress

6. After the task is completed, the Remote Copy relationship status is in a Consistent Synchronized state (Figure 10-269).


Figure 10-269 Consistent synchronized Remote Copy relationship
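Tip: The CLI equivalent is svctask startrcrelationship; the -primary flag selects the copy direction, and the relationship name is hypothetical:

svctask startrcrelationship -primary master MMREL1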

10.9.10 Starting a Remote Copy Consistency Group


All of the relationships in a Consistency Group will be brought to the same state. To start the Remote Copy Consistency Group, follow these steps:

1. From the SVC Overview panel, click Copy Services → Remote Copy.

2. Select the Consistency Group that you want to start (Figure 10-270).

Figure 10-270 Remote Copy Consistency Groups view

3. Click Start in the Actions menu (Figure 10-271) to start the Remote Copy Consistency Group.

Figure 10-271 Start action

4. You can check the Remote Copy Consistency Group progress as shown in Figure 10-272 on page 784.


Figure 10-272 Checking Remote Copy Consistency Group progress

5. After the task is completed, the Consistency Group and all its relationship statuses are in a Consistent Synchronized state (Figure 10-273).

Figure 10-273 Consistent synchronized Consistency Group
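Tip: The CLI equivalent, with a hypothetical group name:

svctask startrcconsistgrp -primary master CG_W2K3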

10.9.11 Switching the copy direction for a Remote Copy relationship


When a Remote Copy relationship is in the Consistent Synchronized state, the copy direction for the relationship can be changed. Only relationships that are not a member of a Consistency Group, or the only relationship in a Consistency Group, can be switched individually. Such relationships can be switched from master to auxiliary or from auxiliary to master, depending on the case.

Important: When the copy direction is switched, it is crucial that no outstanding I/O exists to the volume that transitions from primary to secondary, because all of the I/O will be inhibited to that volume when it becomes the secondary. Therefore, careful planning is required prior to switching the copy direction for a Remote Copy relationship.

Perform the following steps to switch a Remote Copy relationship:

1. From the SVC Overview panel, click Copy Services → Remote Copy.

2. Expand the Not in a Group column.

3. Select the Remote Copy relationship that you want to switch in the table.

4. Click Switch in the Actions menu (Figure 10-274) to start the Remote Copy process.

Tip: You can also right-click a relationship and select Switch from the list.


Figure 10-274 Switch Copy Direction action

5. A Warning window opens (Figure 10-275), and a confirmation is needed to switch the Remote Copy relationship direction. In the example shown in Figure 10-275, the Remote Copy direction is switched from the master volume to the auxiliary volume. Click OK to confirm your choice.

Figure 10-275 Warning Window

6. The copy direction is now switched, as shown in Figure 10-276. The auxiliary volume is now accessible and indicated as the primary volume. Synchronization now runs from the auxiliary volume to the master volume.

Figure 10-276 Checking Remote Copy synchronization direction
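From the CLI, the same switch is done with switchrcrelationship, which takes the new primary explicitly (switchrcconsistgrp is the Consistency Group analogue). A minimal sketch with the hypothetical relationship name REL_ITSO1:

   svctask switchrcrelationship -primary aux REL_ITSO1

Specifying -primary master switches the direction back again.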


10.9.12 Switching the copy direction for a Consistency Group


When a Consistency Group is in the Consistent Synchronized state, the copy direction for this Consistency Group can be changed.
Important: When the copy direction is switched, it is crucial that no outstanding I/O exists to the volumes that transit from primary to secondary, because all of the I/O will be inhibited to those volumes when they become secondary. Therefore, careful planning is required prior to switching the copy direction for a Consistency Group.
Perform the following steps to switch a Consistency Group:
1. From the SVC Overview panel, click Copy Services → Remote Copy.
2. Select the Consistency Group that you want to switch.
3. Click Switch in the Actions menu (Figure 10-277) to switch the copy direction.

Tip: You can also right-click a relationship and select Switch from the list.

Figure 10-277 Switch action

4. A Warning window opens (Figure 10-278 on page 787). A confirmation is needed to switch the Consistency Group direction. In the example shown in Figure 10-278 on page 787, the Consistency Group is switched from the master group to the auxiliary group. Click OK to confirm your choice.


Figure 10-278 Warning window for ITSO_SVC2

5. The Remote Copy direction is now switched, as shown in Figure 10-279. The auxiliary volume is now accessible and indicated as the primary volume. Synchronization now runs from the auxiliary to the master volume.

Figure 10-279 Checking Consistency Group synchronization direction

10.9.13 Stopping a Remote Copy relationship


After it is started, the Remote Copy process can be stopped, if needed. Only relationships that are not a member of a Consistency Group, or the only relationship in a Consistency Group, can be stopped individually. You can also use this action to enable write access to a consistent secondary volume.
Perform the following steps to stop a Remote Copy relationship:
1. From the SVC Overview panel, click Copy Services → Remote Copy.
2. Expand the Not in a Group column.
3. Select the Remote Copy relationship that you want to stop in the table.
4. Click Stop in the Actions menu (Figure 10-280 on page 788) to stop the Remote Copy process.
Tip: You can also right-click a relationship and select Stop from the list.


Figure 10-280 Stop action

5. The Stop Remote Copy Relationship window opens (Figure 10-281). To allow secondary read/write access, select Allow secondary read/write access, and then click Stop Relationship to confirm your choice.

Figure 10-281 Stop Remote Copy Relationship window

6. The new relationship status can be checked as shown in Figure 10-282. The relationship is now stopped.

Figure 10-282 Checking Remote Copy synchronization status
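The CLI equivalent is stoprcrelationship; its -access flag corresponds to the Allow secondary read/write access check box (stoprcconsistgrp is the Consistency Group analogue). A minimal sketch with the hypothetical name REL_ITSO1:

   svctask stoprcrelationship -access REL_ITSO1

Omit -access if the secondary volume must remain read-only.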

10.9.14 Stopping a Consistency Group


After it is started, the Consistency Group can be stopped, if necessary. You can also use this action to enable write access to consistent secondary volumes.
Perform the following steps to stop a Consistency Group:
1. From the SVC Overview panel, click Copy Services → Remote Copy.
2. Select the Consistency Group that you want to stop in the table.

3. Click Stop in the Actions menu (Figure 10-283) to stop the Remote Copy Consistency Group.
Tip: You can also right-click a Consistency Group and select Stop from the list.

Figure 10-283 Stop action

4. The Stop Remote Copy Consistency Group window opens (Figure 10-284). To allow secondary read/write access, select Allow secondary read/write access, and then click Stop Consistency Group to confirm your choice.

Figure 10-284 Stop Remote Copy Consistency Group window

5. The new relationship status can be checked as shown in Figure 10-285. The relationship is now stopped.

Figure 10-285 Checking Remote Copy synchronization status


10.9.15 Deleting stand-alone Remote Copy relationships


Perform the following steps to delete a stand-alone Remote Copy relationship:
1. From the SVC Overview panel, click Copy Services → Remote Copy.
2. Select the Remote Copy relationship that you want to delete in the table.
Note: To select multiple Remote Copy relationships, hold down Ctrl and use your mouse to select the entries you want.
3. Click Delete Relationship in the Actions menu (Figure 10-286).
Tip: You can also right-click a Remote Copy relationship and select Delete Relationship from the list.

Figure 10-286 Delete Relationship action

4. The Delete Relationship window opens (Figure 10-287 on page 790). In the Verify the number of relationships you are deleting field, enter the number of relationships that you want to delete. This verification helps protect against deleting the wrong relationships. Click Delete to complete the operation (Figure 10-287 on page 790).

Figure 10-287 Delete Remote Copy relationship
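From the CLI, a stand-alone relationship can be deleted with rmrcrelationship (rmrcconsistgrp is the analogue for a Consistency Group). Neither command is shown in this chapter's GUI panels, so treat the following as a sketch with the hypothetical name REL_ITSO1:

   svctask rmrcrelationship REL_ITSO1

Deleting the relationship does not delete the master or auxiliary volumes; it only removes the copy relationship between them.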

10.9.16 Deleting a Consistency Group


Important: Deleting a Consistency Group does not delete its Remote Copy relationships.


Perform the following steps to delete a Consistency Group:
1. From the SVC Overview panel, click Copy Services → Remote Copy.
2. Select the Consistency Group that you want to delete in the left column.
3. Click Delete in the Actions menu (Figure 10-288).

Figure 10-288 Delete Consistency Group action

4. A Warning window opens as shown in Figure 10-289. Click OK to complete the operation.

Figure 10-289 Confirmation message

10.10 Managing the cluster using the GUI


This section explains the various configuration and administrative tasks that you can perform on the cluster.

10.10.1 System Status information


From the System Status panel, perform the following steps to display the cluster and nodes information:
1. From the SVC Overview panel, select Monitoring → System.
2. The System Status panel (Figure 10-290) opens.


Figure 10-290 System Status panel

By moving the mouse over the tower in the left part of the panel, you can view the global storage usage, as shown in Figure 10-291 on page 792. Using this method, you can monitor the Physical Capacity and the Used Capacity of your cluster.

Figure 10-291 Physical Capacity information


10.10.2 View I/O groups and their associated nodes


The right side of the System Status panel shows an overview of the cluster with I/O groups and their associated nodes. In this dynamic illustration, the status of each node is indicated by a color code (Figure 10-292).

Figure 10-292 Cluster view with node status

10.10.3 View cluster properties


1. From the System Status panel, to obtain information about the cluster, click the cluster as shown in Figure 10-293 on page 793.

Figure 10-293 General cluster information


2. When you click the Info tab, the following information is displayed:
General information:
- Name
- ID
- Location
Capacity information:
- Total MDisk Capacity
- Space in MDisk Groups
- Space Allocated to Volumes
- Total Free Space
- Total Volume Capacity
- Total Volume Copy Capacity
- Total Used Capacity
- Total Over Allocation

10.10.4 Renaming an SVC cluster


From the System Status panel, perform the following steps to rename the cluster:
1. Click the cluster name as shown in Figure 10-293.
2. Click Manage.
3. Specify a new name for the cluster as shown in Figure 10-294.

Figure 10-294 Manage tab: Change cluster name

Cluster name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The cluster name can be between one and 63 characters in length.
4. Click Save.
5. A Warning window opens as shown in Figure 10-295. If you are using the iSCSI protocol, changing either name also changes the iSCSI Qualified Name (IQN) of all of the nodes in the cluster and might require reconfiguration of all iSCSI-attached hosts, because the IQN for each node is generated using the cluster and node names.


Figure 10-295 Warning window

6. Click OK to confirm that you want to change the cluster name.

10.10.5 Shutting down a cluster


If all input power to a SAN Volume Controller cluster is removed for more than a few minutes (for example, if the machine room power is shut down for maintenance), it is important that you shut down the cluster before you remove the power. Shutting down the cluster while it is still connected to the main power ensures that the uninterruptible power supply unit batteries are still fully charged when power is restored.

If you remove the mains power while the cluster is still running, the uninterruptible power supply unit detects the loss of power and instructs the nodes to shut down. This shutdown can take several minutes to complete, and although the uninterruptible power supply unit has sufficient power to perform the shutdown, you will be unnecessarily draining the uninterruptible power supply unit batteries.

When power is restored, the SVC nodes start. However, one of the first checks that the SVC nodes make is to ensure that the uninterruptible power supply unit batteries have sufficient power to survive another power failure, thereby enabling the node to perform a clean shutdown. (You do not want the uninterruptible power supply unit to run out of power while the nodes' shutdown activities have not yet completed.) If the uninterruptible power supply unit batteries are not sufficiently charged, the node does not start. Be aware that it can take up to three hours to charge the batteries sufficiently for a node to start.

Note: When a node shuts down due to loss of power, the node dumps the cache to an internal hard drive so that the cached data can be retrieved when the cluster starts. With 8F2/8G4 nodes, the cache is 8 GB. With CF8/CG8 nodes, the cache is 24 GB. So it can take several minutes to dump to the internal drive.

SVC uninterruptible power supply units are designed to survive at least two power failures in a short time before nodes refuse to start until the batteries have sufficient power (to survive another immediate power failure). If, during your maintenance activities, the uninterruptible power supply unit detected power and a loss of power multiple times (and thus the nodes started and shut down more than one time in a short time frame), you might find that you have unknowingly drained the uninterruptible power supply unit batteries. You will have to wait until they are charged sufficiently before the nodes will start.


Important: Before shutting down a cluster, quiesce all I/O operations that are destined for this cluster, because you will lose access to all of the volumes that are provided by this cluster. Failure to do so might result in failed I/O operations being reported to your host operating systems. There is no need to quiesce all I/O operations if you are only shutting down one SVC node.
Begin the process of quiescing all I/O to the cluster by stopping the applications on your hosts that are using the volumes that are provided by the cluster. If you are unsure which hosts are using the volumes that are provided by the cluster, follow the procedure explained in 9.5.21, Showing the host to which the volume is mapped on page 508, and repeat this procedure for all volumes.
From the System Status panel, perform the following steps to shut down your cluster:
1. Click the cluster name as shown in Figure 10-296.

Figure 10-296 General cluster information

2. Click the Manage tab and then click Shut Down Cluster as shown in Figure 10-297 on page 796.

Figure 10-297 Manage tab: Shut Down Cluster

3. The Confirm Cluster Shutdown window (Figure 10-298) opens. You will receive a message asking you to confirm whether you want to shut down the cluster. Ensure that you have stopped all FlashCopy mappings, Remote Copy relationships, data migration


operations, and forced deletions before continuing. Click Yes to begin the shutdown process.
Important: At this point, you will lose administrative contact with your cluster.

Figure 10-298 Shutting down the cluster confirmation window

You have now completed the required tasks to shut down the cluster. At this point, you can shut down the uninterruptible power supply units by pressing the power buttons on their front panels.
Tip: When you shut down the cluster, it will not automatically start; you must manually start the cluster. If the cluster shuts down because the uninterruptible power supply unit has detected a loss of power, it will automatically restart when the uninterruptible power supply unit detects that the power has been restored (and the batteries have sufficient power to survive another immediate power failure).

Note: To restart the SVC cluster, you must first restart the uninterruptible power supply units by pressing the power buttons on their front panels. After they are on, go to the service panel of one of the nodes within your SVC cluster and press the power on button, releasing it quickly. After it is fully booted (for example, displaying Cluster: on line 1 and the cluster name on line 2 of the SVC front panel), you can start the other nodes in the same way. As soon as all nodes are fully booted and you have reestablished administrative contact using the GUI, your cluster is fully operational again.
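The shutdown can also be issued from the CLI with the stopcluster command (listed among the Service role commands in Table 10-1 on page 825). A minimal sketch:

   svctask stopcluster

The command asks for confirmation before powering the cluster down. With the -node parameter, it shuts down only the specified node rather than the whole cluster.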

10.10.6 Upgrading software


From the System Status panel, perform the following steps to upgrade the software of your cluster:
1. Click the cluster name as shown in Figure 10-296 on page 796.


Figure 10-299 General cluster information

2. Click the Manage tab and then click Upgrade Cluster as shown in Figure 10-300.

Figure 10-300 Manage tab: Software update link

3. Follow the instructions provided in 10.15.11, Upgrading software on page 854.

10.11 Managing I/O Groups


In the following sections we illustrate how to manage I/O Groups.

10.11.1 View I/O group properties


From the System Status panel, you can see the I/O group properties.
1. Click an I/O group as shown in Figure 10-301.


Figure 10-301 I/O group information

2. Click the Info tab to obtain the following information:
General information:
- Name
- ID
- Number of Nodes
- Number of Hosts
- Number of Volumes
Memory information:
- FlashCopy
- Global Mirror and Metro Mirror
- Volume Mirroring
- RAID

10.11.2 Modifying I/O group properties


From the System Status panel, perform the following steps to modify an I/O group:
1. Click an I/O group as shown in Figure 10-302 on page 800.


Figure 10-302 I/O group information

2. Click the Manage tab.
3. From this tab, as shown in Figure 10-303 on page 801, you can modify:
- The I/O Group name
I/O Group name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The I/O Group name can be between one and 63 characters in length.
- The amount of memory for the following features:
  FlashCopy (default 20 MB - maximum 512 MB)
  Global Mirror and Metro Mirror (default 20 MB - maximum 512 MB)
  Volume Mirroring (default 20 MB - maximum 512 MB)
  RAID (default 40 MB - maximum 512 MB)

Important: For Volume mirroring, Copy Services (FlashCopy, Metro Mirror, and Global Mirror) and RAID operations, memory is traded against memory that is available to the cache. The amount of memory can be decreased or increased. The maximum combined memory size across all features is 552 MB.


Figure 10-303 Modify I/O Group properties window
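These memory settings can also be changed from the CLI with the chiogrp command. A minimal sketch, assuming the I/O group is named io_grp0 and that the FlashCopy bitmap memory is being raised to 64 MB (placeholder values):

   svctask chiogrp -feature flash -size 64 io_grp0

The -feature parameter accepts flash, remote, mirror, or raid, matching the four features listed above; the same 552 MB combined limit applies.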

10.12 Managing nodes


In this section we show how to manage nodes.

10.12.1 View node properties


From this panel, you can obtain detailed information about node properties.
1. Click a node as shown in Figure 10-304.

Figure 10-304 Node information


2. Click the Info tab to obtain the following information:
General information:
- Name
- ID
- Status
- Hardware
- WWNN
- I/O Group
- Configuration node
Redundancy information:
- Failover Partner node
iSCSI information:
- iSCSI Name (IQN)
- iSCSI Alias
- Failover iSCSI Name
- Failover iSCSI Alias, if iSCSI failover is active
UPS information:
- Serial Number
- Unique ID
Ports information:
- WWPNs
- Status
- Speed

3. Click the VPD tab to display the vital product data (VPD) for this node.
Note: The amount of information in the vital product data (VPD) tab is extensive, so we do not describe it in this section. For the list of these elements, refer to the Command-Line Interface User's Guide - Version 6.3.0 and search for the lsnodevpd command.

10.12.2 Renaming a node


From the System Status panel, perform the following steps to rename a node:
1. Click a node as shown in Figure 10-305 on page 803.


Figure 10-305 Node information window

2. Click the Manage tab.
3. Specify a new name for the node as shown in Figure 10-306.

Figure 10-306 Manage tab: Change node name

Node name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The node name can be between one and 63 characters in length.


4. Click Save.
5. A Warning window opens as shown in Figure 10-307 on page 804. This is because the iSCSI Qualified Name (IQN) for each node is generated using the cluster and node names. If you are using the iSCSI protocol, changing either name also changes the IQN of all of the nodes in the cluster and might require reconfiguration of all iSCSI-attached hosts.

Figure 10-307 Warning window - changing the node name

6. To confirm that you want to change the node name, click OK.
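The rename can also be done from the CLI with the chnode command. A minimal sketch, where SVC1N3 is a hypothetical new name and node2 is the current node name:

   svctask chnode -name SVC1N3 node2

The same IQN caution applies regardless of which interface performs the rename.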

10.12.3 Adding a node to the cluster


To complete this operation, perform the following steps:
1. Click an empty node position to view the candidate nodes as shown in Figure 10-308.

Figure 10-308 Add node window

Important: Keep in mind that you need to have at least two nodes in an I/O group. Add your available nodes in sequence.
2. Select the node that you want to add to your cluster using the drop-down list. Change its name, if needed, and click Add Node as shown in Figure 10-309 on page 805.


Figure 10-309 Add a node to the cluster

3. As shown in Figure 10-310, a window appears to inform you about the time required to add a node to the cluster.

Figure 10-310 Warning message

4. If you want to add it, click OK.
Important: When a node is added to a cluster, it displays a state of Adding and a yellow color, as shown in Figure 10-292 on page 793. It can take as long as 30 minutes for the node to be added to the cluster, particularly if the software version of the node has changed.

10.12.4 Removing a node from the cluster


From the System Status panel, perform the following steps to remove a node:
1. Click a node as shown in Figure 10-311 on page 806.


Figure 10-311 Node information window

2. Click the Manage tab and then click Remove node as shown in Figure 10-312.

Figure 10-312 Manage tab: Remove node

3. A Warning window opens as shown in Figure 10-313 on page 807. By default, the cache is flushed before the node is deleted to prevent data loss if a failure occurs on the other node in the I/O group. In certain circumstances, such as when the system is already degraded, you can take the specified node offline immediately, without flushing the cache or ensuring that data loss does


not occur, by selecting the Bypass check for volumes that will go offline, and remove the node immediately without flushing its cache check box.

Figure 10-313 Warning window - removing a node

If this node is the last node in the cluster, the warning message is different, as shown in Figure 10-314. Before you delete the last node in the cluster, ensure that you want to destroy the cluster. Removing the last node in the cluster destroys the cluster, and the user interface and any open CLI sessions are lost.

Figure 10-314 Warning window for the last node

4. If you want to remove it, click OK. This makes the node a candidate to be added back into this cluster or into another cluster.
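Nodes can also be added and removed from the CLI with the addnode and rmnode commands (both appear in the Service role command list in Table 10-1 on page 825). A minimal sketch; the panel name 104603 and the node name SVC1N2 are hypothetical placeholders:

   svctask addnode -panelname 104603 -iogrp io_grp0 -name SVC1N2
   svctask rmnode SVC1N2

As in the GUI, removing a node returns it to the candidate state so that it can be added back to this cluster or to another cluster.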

10.13 Troubleshooting
Events detected by the system are saved in an event log. When an entry is made in this event log, the condition is analyzed and classified to help you diagnose problems.

10.13.1 Monitoring panel


The Monitoring Actions panel (Figure 10-315 on page 808) displays event conditions that require action, along with procedures to diagnose and fix them. To access this panel, from the Overview panel that is shown in Figure 10-1 on page 632, select Monitoring → Events.

Figure 10-315 Recommended Actions panel

The highest-priority event is indicated, along with information about how long ago the event occurred. If an event is reported, you must select the event and run a fix procedure.

Event properties
To retrieve the properties and sense data for a specific event, perform the following steps:
1. Select an event in the table.
2. Click Properties in the Actions menu (Figure 10-316 on page 808).

Figure 10-316 Event properties action

Tip: You can also obtain access to the Properties action by right-clicking an event.


3. The Properties and Sense Data for Event sequence_number window (where sequence_number is the sequence number of the event that you selected in the previous step) opens, as shown in Figure 10-317 on page 809.

Figure 10-317 Properties and sense data for event window

Tip: From the Properties and Sense Data for Event window, you can use the Previous and Next buttons to navigate between events.
4. Click Close to return to the Recommended Actions panel.

Run Fix Procedure


To run a procedure that fixes an event, perform the following steps:
1. Select an event in the table.
Tip: You can also click Run Fix Procedure at the top of the panel (see Figure 10-318) to solve the most critical event.
2. Click Run Fix Procedure in the Actions menu (Figure 10-318 on page 810).


Figure 10-318 Run Fix Procedure Action

Tip: You can also obtain access to the Run Fix Procedure action by right-clicking an event.
3. The Directed Maintenance Procedure window opens as shown in Figure 10-319. Follow the wizard and its various steps to fix the event.
Note: We do not describe all the possible steps here, because the steps involved depend on the event.

Figure 10-319 Directed Maintenance Procedure wizard

4. Click Close to return to the Recommended Actions panel.

10.13.2 Event Log panel


From the Event Log panel (Figure 10-320 on page 811), you can choose to display Recommended Actions, Unfixed Messages and Alerts, or Show All.


To access this panel, from the Overview panel shown in Figure 10-1 on page 632, select Monitoring → Event Log, and then select which events you want displayed.

Figure 10-320 Event Log panel

Certain alerts have a four-digit error code and a fix procedure that helps you fix the problem. Other alerts also require action, but do not have a fix procedure. Messages are fixed when you acknowledge reading them.

Filtering events
You can filter events in different ways. Filtering can be based on event status (see Basic filtering), or over a period of time (see Time filtering on page 812). Certain events require a certain number of occurrences in 25 hours before they are displayed as unfixed. If they do not reach this threshold in 25 hours, they are flagged as expired. Monitoring events are below the coalesce threshold and are usually transient. You can also sort events by time or error code. When you sort by error code, the most serious events (those with the lowest numbers) are displayed first.

Basic filtering
The event log display can be filtered in three ways using the drop-down menu in the upper right corner of the panel (see Figure 10-321 on page 811):
- Recommended (events requiring attention): displays all unfixed alerts and messages
- Unfixed Messages and Alerts: displays all alerts and messages
- Show all (include below-threshold events): displays all events: alerts, messages, monitoring, and expired

Figure 10-321 Filter event log display


Time filtering
There are two ways to perform time filtering: by selecting a start date and time and an end date and time, or by selecting an event and showing the entries within a certain period of time of this event. In this section we demonstrate both methods.
By selecting a start date and time, and an end date and time
To use this time frame filter, perform the following steps:
Click Filter by Date in the Actions menu (Figure 10-322).

Figure 10-322 Filter by date action

Tip: You can also obtain access to the Filter by Date action by right-clicking an event.
The Date/Time Filter window opens (Figure 10-323). From this window, select a start date and time and an end date and time.

Figure 10-323 Date/Time Filter window

Click Filter and Close. Your panel is now filtered based on the time frame. To disable this time frame filter, click Reset Date Filter in the Actions menu (Figure 10-324 on page 813).


Figure 10-324 Reset Date Filter action

By selecting an event and showing the entries within a certain period of time of this event
To use this time frame filter, perform the following steps:
a. Select an event in the table.
b. In the Actions menu, click Show entries within..., select minutes, hours, or days, and finally select a value (Figure 10-325).

Figure 10-325 Show entries within... action

Tip: You can also access the Show entries within... action by right-clicking an event.
c. Your window is now filtered based on the time frame (Figure 10-326).


Figure 10-326 Time frame filtering

To disable this time frame filter, click Reset Date Filter in the Actions menu (Figure 10-327).

Figure 10-327 Reset Date Filter action

Event properties
To retrieve the properties and sense data for a specific event, perform the following steps:
1. Select an event in the table.
2. Click Properties in the Actions menu (Figure 10-328).


Figure 10-328 Event properties action

Tip: You can also access the Properties action by right-clicking an event.

3. The Properties and Sense Data for Event sequence_number window (where sequence_number is the sequence number of the event that you selected in the previous step) opens, as shown in Figure 10-329.

Figure 10-329 Properties and sense data for event window

Tip: From the Properties and Sense Data for Event window, you can use the Previous and Next buttons to navigate between events.


4. Click Close to return to the Event log.

Mark an event as fixed


To mark one or more events as fixed, perform the following steps:
1. Select one or more entries in the table.
Tip: To select multiple events, hold down the Ctrl key and use the mouse to select the entries you want to select.
2. Click Mark as fixed in the Actions menu (Figure 10-330).

Figure 10-330 Mark as fixed action

Tip: You can also access the Mark as fixed action by right-clicking an event. 3. The Warning window opens (Figure 10-331).

Figure 10-331 Warning window

4. Click OK to confirm your choice.
Note: To be able to see fixed events, you need to filter the event log panel using the Expanded (include fixed events) filter profile or the Show all (include below-threshold events) filter profile.


Mark an event as unfixed


To mark one or more events as unfixed, perform the following steps:
1. Select one or more entries in the table.
Tip: To select multiple events, hold down the Ctrl key and use the mouse to select the entries you want to include.
2. Click Mark as unfixed in the Actions menu (Figure 10-332).

Figure 10-332 Mark as unfixed action

Tip: You can also access the Mark as unfixed action by right-clicking an event.

3. The Warning window opens (Figure 10-333 on page 817).

Figure 10-333 Warning message

4. Click OK to confirm your choice.

10.13.3 Run fix procedure


Note: Several alerts have a four-digit error code and a fix procedure that helps you fix the problem. Those are the steps described here. Other alerts also require action but do not have a fix procedure. Messages are fixed when you acknowledge reading them, as shown in Figure 10-334.


To run a procedure that fixes an alert, perform the following steps:
1. Select an alert with a four-digit error code in the table.
2. Click Run Fix Procedure in the Actions menu (Figure 10-334).

Figure 10-334 Run Fix Procedure action

Tip: You can also access the Run Fix Procedure action by right-clicking an alert.

3. The Directed Maintenance Procedure window opens (Figure 10-335 on page 818). You must follow the wizard and its steps to fix the event.
Note: We do not describe all the various steps, because they depend on the alert.

Figure 10-335 Directed Maintenance Procedure wizard

4. Click Close to return to the Event Log window.

Clear log
To clear the logs, perform the following steps:
1. Click Clear Log (Figure 10-336).


Figure 10-336 Clear log button

2. A Warning window opens (Figure 10-337). From this window, you must confirm that you want to delete the logs.

Figure 10-337 Warning window

3. Click OK to confirm your choice.
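Equivalent diagnostics are available from the CLI; the following commands appear in the role table (Table 10-1 on page 825). As a sketch: finderr analyzes the event log and reports the most serious unfixed error code, dumperrlog dumps the event log to a file on the configuration node, and clearerrlog clears the log, as the Clear Log button does:

   svctask finderr
   svctask dumperrlog
   svctask clearerrlog

Note that clearerrlog requires the Service role or higher, while finderr and dumperrlog are available to the Monitor role.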

10.13.4 Support panel


From the support panel shown in Figure 10-338, you can download support packages that contain log files and information that can be sent to support personnel to help troubleshoot the system. You can either download individual log files or download statesaves, which are dumps or livedumps of system data.

Chapter 10. SAN Volume Controller operations using the GUI

819

7933 10 GUI Operations Torben.fm

Draft Document for Review January 17, 2012 6:10 am

Figure 10-338 Support panel

Download support packages


To download the support packages, perform the following steps: 1. Click Download Support Packages (Figure 10-339).

Figure 10-339 Download Support Packages

2. A Download Support Packages window opens (Figure 10-340 on page 821). From there, select which kind of logs you want to download:
- Standard logs: contains the most recent logs that have been collected for the cluster. These logs are the most commonly used by support to diagnose and solve problems.
- Standard logs plus one existing statesave: contains the standard logs for the cluster and the most recent statesave from any of the nodes in the cluster. Statesaves are also known as dumps or livedumps.
- Standard logs plus most recent statesave from each node: contains the standard logs for the cluster and the most recent statesave from each node in the cluster. Statesaves are also known as dumps or livedumps.
- Standard logs plus new statesaves: generates a new statesave (livedump) for all the nodes in the cluster and packages them with the most recent logs.


Figure 10-340 Download Support Package window

Note: Depending on your choice, this action can take several minutes to complete.

3. Click Download to confirm your choice (Figure 10-340). 4. Finally, select where you want to save these logs (Figure 10-341).

Figure 10-341 Save the logs file on your workstation

Download individual packages


To manually download packages, perform the following tasks:
1. Activate the individual log files view (Figure 10-344 on page 822) by clicking the Show full log listing... link (Figure 10-342).

Figure 10-342 Show full log listing link

2. On the detailed view, select the node from which you want to download logs using the drop-down menu in the upper right corner of the panel (Figure 10-343 on page 822).


Figure 10-343 Node selection

3. Select the package or packages that you want to download (Figure 10-344).

Figure 10-344 Selection of individuals packages

Tip: To select multiple packages, hold down the Ctrl key and use the mouse to select the entries you want to include.

4. Click Download in the Actions menu (Figure 10-345).


Figure 10-345 Download packages

Tip: You can also access the Download action by right-clicking a package.
5. Finally, select where you want to save these logs on your workstation.
Tip: You can also delete packages by clicking Delete in the Actions menu.

CIMOM Logging Level


Select this option to include CIMOM tracing components and logging details.
Note: The maximum logging level can have a significant impact on the performance of the CIMOM interface.
To change the CIMOM Logging Level, use the drop-down menu in the upper right corner of the panel as shown in Figure 10-346:
- CIMOM Logging Level: Low
- CIMOM Logging Level: Medium
- CIMOM Logging Level: High


Figure 10-346 Change the CIMOM Logging Level

10.14 User Management


Users are managed from within the User Management menu in the SAN Volume Controller GUI, as shown in Figure 10-347 on page 824.

Figure 10-347 User Management menu

Each user account has a name, a role, and a password assigned to it, which differs from the Secure Shell (SSH) key-based role approach that is used by the CLI. Note that starting with version 6.3, you can access the CLI with a password and no SSH key. We describe authentication in detail in 2.9, User authentication on page 44.
The role-based security feature organizes the SVC administrative functions into groups, which are known as roles, so that permissions to execute the various functions can be granted differently to the separate administrative users. There are four major roles and one special role. Table 10-1 on page 825 lists the user roles.


Table 10-1 Authority roles

Role: Security Admin
User: Superusers
Allowed commands: All commands

Role: Administrator
User: Administrators that control the SVC
Allowed commands: All commands except svctask: chauthservice, mkuser, rmuser, chuser, mkusergrp, rmusergrp, chusergrp, and setpwdreset

Role: Copy Operator
User: For users that control all copy functionality of the cluster
Allowed commands: All svcinfo commands and the following svctask commands: prestartfcconsistgrp, startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap, startfcmap, stopfcmap, chfcmap, startrcconsistgrp, stoprcconsistgrp, switchrcconsistgrp, chrcconsistgrp, startrcrelationship, stoprcrelationship, switchrcrelationship, chrcrelationship, and chpartnership

Role: Service
User: For users that perform service maintenance and other hardware tasks on the cluster
Allowed commands: All svcinfo commands and the following svctask commands: applysoftware, setlocale, addnode, rmnode, cherrstate, writesernum, detectmdisk, includemdisk, clearerrlog, cleardumps, settimezone, stopcluster, startstats, stopstats, and settime

Role: Monitor
User: For users only needing view access
Allowed commands: All svcinfo commands and the following svctask commands: finderr, dumperrlog, dumpinternallog, chcurrentuser, and the svcconfig command: backup

The superuser user is a built-in account that has the Security Admin role permissions. You cannot change permissions or delete this superuser account; you can only change the password. You can also change this password manually on the front panels of the cluster nodes.
An audit log keeps track of actions that are issued through the management GUI or the command-line interface. For more information about this topic, see 10.14.9, Audit log information on page 837.


10.14.1 Creating a user


Perform the following steps to create a user:
1. From the SVC Welcome panel, click User Management in the left menu, and then click the All Users panel.
2. Click New User (Figure 10-348).

Figure 10-348 Create New User

3. The New User window opens (Figure 10-349).

Figure 10-349 New User window


Enter a new user name in the Name field.
User name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The user name can be between one and 256 characters in length.

Authentication Mode section


There are two types of authentication available in this section:
- Local: The authentication method is located on the system. Users must be part of a user group which authorizes them to specific sets of operations. If you select this type of authentication, use the drop-down list to select the user group (Table 10-1 on page 825) that you want the user to be part of.
- Remote: Remote authentication allows users of SAN management applications, such as IBM Tivoli Storage Productivity Center, to authenticate to the cluster using the authentication service provided by the SAN management application. Ensure that the remote authentication service is configured for the SAN management application. To complete this task, you need the following information regarding the remote authentication service:
  - The web address for the remote authentication service
  - The user name and password for HTTP basic authentication; these credentials are created by and obtained from the administrator of the remote authentication service

Local credentials section


There are two types of local credentials that can be configured in this section, depending on your needs:
- GUI Authentication: The password authenticates users to the management GUI. Enter the password in the Password field.
Password: The password can be between 6 and 64 characters in length and it cannot begin or end with a space.
- CLI Authentication: The SSH key authenticates users to the command-line interface. The SSH public key needs to be uploaded using the Browse... button in the SSH Public Key field. If you haven't created an SSH key pair, you can still access the SVC cluster by using your user name and password.
4. To create the user, click the Create button as shown in Figure 10-349 on page 826.
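Users can also be created from the CLI with the mkuser command (restricted to the Security Admin role, per Table 10-1 on page 825). A minimal sketch with hypothetical values:

   svctask mkuser -name jane -usergrp Monitor -password Passw0rd

An SSH key file can be supplied instead of, or in addition to, a password if the user needs key-based CLI access.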

10.14.2 Modifying user properties


Perform the following steps to change user properties:
1. From the SVC Overview panel, click Access in the left menu, and then click the Users panel.
2. In the left column, select a User Group.
3. Select a user.
4. Click Properties in the Actions menu (Figure 10-350 on page 828).


Tip: You can also change user properties by right-clicking a user and selecting Properties from the list.

Figure 10-350 User Properties Action

5. The User Properties window opens (Figure 10-351).

Figure 10-351 User Properties window

From this window, you can change the authentication mode and local credentials.
Authentication Mode
There are two types of authentication available in this section:
- Local: The authentication method is located on the system. Users must be part of a user group which authorizes them to specific sets of operations. If you select this type of authentication, use the drop-down list to select the user group (Table 10-1 on page 825) that you want the user to be part of.


- Remote: Remote authentication allows users of SAN management applications, such as IBM Tivoli Storage Productivity Center, to authenticate to the cluster using the authentication service provided by the SAN management application. Ensure that the remote authentication service is configured for the SAN management application. To complete this task, you need the following information regarding the remote authentication service:
  - The web address for the remote authentication service
  - The user name and password for HTTP basic authentication; these credentials are created by and obtained from the administrator of the remote authentication service
Local Credentials
There are two types of local credentials that can be configured in this section, depending on your needs:
- GUI authentication: The password authenticates users to the management GUI. You need to enter the password in the Password field.
Password: The password can be between 6 and 64 characters in length and it cannot begin or end with a space.
- CLI authentication: The SSH key authenticates users to the command-line interface. The SSH public key needs to be uploaded using the Browse... button in the SSH Public Key field.
6. To confirm the changes, click OK (see Figure 10-351 on page 828).

10.14.3 Removing a user password


Note: To be able to remove the password for a given user, the SSH Public Key must be defined. Otherwise, this action is not available.
Perform the following steps to remove a user password:
1. From the SVC Overview panel, click Access and then click the Users panel.
2. Select the user.
3. Click Remove Password in the Actions menu as shown in Figure 10-352 on page 830.
Tip: You can also remove the password by right-clicking a user and selecting Remove Password from the list.


Figure 10-352 Remove Password action

4. The Warning window opens (Figure 10-353). Click OK to complete the operation.

Figure 10-353 Warning window

10.14.4 Removing a user SSH Public Key


Note: To be able to remove the SSH Public Key for a given user, the password must be defined. Otherwise, this action is not available.
Perform the following steps to remove a user SSH Public Key:
1. From the SVC Overview panel, click Access and then click the Users panel.
2. Select the user.
3. Click Remove SSH Key in the Actions menu as shown in Figure 10-354 on page 831.
Tip: You can also remove the SSH Public Key by right-clicking a user and selecting Remove SSH Key from the list.


Figure 10-354 Remove SSH Key action

4. The Warning window opens (Figure 10-355). Click OK to complete the operation.

Figure 10-355 Warning window

10.14.5 Deleting a user


Perform the following steps to delete a user:
1. From the SVC Overview panel, click Access and then click the Users panel.
2. Select the user.
Important: To select multiple users to delete, hold down the Ctrl key and use the mouse to select the entries you want to delete.
3. Click Delete in the Actions menu as shown in Figure 10-356 on page 832.
Tip: You can also delete a user by right-clicking the user and selecting Delete from the list.


Figure 10-356 Delete action

4. The Delete User window opens (Figure 10-357). Click Delete to complete the operation.

Figure 10-357 Delete User window

10.14.6 Creating a user group


Five user groups are created by default on the SVC. If needed, you can create additional ones. Perform the following steps to create a user group:
1. From the SVC Overview panel, click Access in the left menu and then click the Users panel.
2. Click Global Actions and then select New User Group (Figure 10-358 on page 833).


Figure 10-358 Selecting New User Group

3. The New User Group window opens (Figure 10-359).

Figure 10-359 New User Group window

Enter a name for the group in the Group Name field.
Group name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The group name can be between one and 63 characters in length.
Role section
A role needs to be selected from Monitor, Copy Operator, Service, Administrator, or Security Administrator. See Table 10-1 on page 825 for more information about these roles.

Note: Remote authentication allows users of SAN management applications, such as IBM Tivoli Storage Productivity Center, to authenticate to the cluster using the authentication service provided by the SAN management application.
4. To create the user group, click Create (Figure 10-359 on page 833).
5. You can verify the creation in the Users panel (Figure 10-360).

Figure 10-360 Verify user group creation
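The CLI equivalent is mkusergrp. A minimal sketch, where ITSO_admin is a hypothetical group name:

   svctask mkusergrp -name ITSO_admin -role Administrator

The -role value must be one of the five roles from Table 10-1 on page 825.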

10.14.7 Modifying user group properties


Note: For preset user groups (SecurityAdmin, Administrator, CopyOperator, Service, and Monitor), you cannot change their respective roles. You can only update the remote authentication section.
Perform the following steps to change user group properties:
1. From the SVC Overview panel, click Access in the left menu and then click the Users panel.
2. In the left column, select the User Group.
3. Click Properties in the Actions menu as shown in Figure 10-361 on page 835.


Figure 10-361 Properties action

4. The User Group Properties window opens (Figure 10-362).

Figure 10-362 User group properties window

From this window, you can change the role:
Role
A role needs to be selected from Monitor, Copy Operator, Service, Administrator, or Security Administrator. See Table 10-1 on page 825 for more information about these roles.
Note: Remote authentication allows users of SAN management applications, such as IBM Tivoli Storage Productivity Center, to authenticate to the cluster using the authentication service provided by the SAN management application.


5. To confirm the changes, click OK (Figure 10-362 on page 835).

10.14.8 Deleting a user group


Perform the following steps to delete a user group:
1. From the SVC Overview panel, click Access in the left menu and then click the Users panel.
2. In the left column, select the User Group.
3. Click Delete in the Actions menu (Figure 10-363).
Important: You cannot delete the preset user groups SecurityAdmin, Administrator, CopyOperator, Service, or Monitor.

Figure 10-363 Delete action

4. There are two options:
If you do not have any users in this group, the Delete User Group window opens as shown in Figure 10-364. Click Delete to complete the operation.

Figure 10-364 Delete user group window


If you have users in this group, the Delete User Group window opens as shown in Figure 10-365 on page 837. The users of this group will be moved to the Monitor user group.

Figure 10-365 Delete User Group window

10.14.9 Audit log information


An audit log keeps track of actions that are issued through the management GUI or the command-line interface. You can use the audit log to monitor user activity on your system.
To view the audit log, from the SVC Overview panel, click Access in the left menu and then click the Audit Log panel as shown in Figure 10-366.
The audit log entries provide the following information:
- The time and date when the action or command was issued on the system
- The name of the user who performed the action or command
- The IP address of the system where the action or command was issued
- The parameters that were issued with the command
- The results of the command or action
- The sequence number and the object identifier that is associated with the command or action


Figure 10-366 Audit log entries
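The audit log can also be listed from the CLI. A minimal sketch using the catauditlog command, where the -first parameter limits the output to the most recent entries:

   svcinfo catauditlog -first 10

Each returned entry carries the same fields described above: timestamp, user name, source IP address, command parameters, result, sequence number, and object identifier.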

Time filtering
There are two ways to perform time filtering: by selecting a start date and time and an end date and time, or by selecting an entry and showing the entries within a certain period of time of this entry. In this section we demonstrate both methods.
By selecting a start date and time and an end date and time
To use this time frame filter, perform the following steps:
Click Filter by Date in the Actions menu (Figure 10-367).

Figure 10-367 Filter by date action

Tip: You can also access the Filter by Date action by right-clicking an entry.


The Date/Time Filter window opens (Figure 10-368). From this window, select a start date and time and an end date and time.

Figure 10-368 Date/Time Filter window

Click Filter and Close. Your panel is now filtered based on its time frame. To disable this time frame filter, click Reset Date Filter in the Actions menu (Figure 10-369).

Figure 10-369 Reset Date Filter action

By selecting an entry and showing the entries within a certain period of time of this entry
To use this time frame filter, perform the following steps:
Select an entry in the table. In the Actions menu, click Show entries within..., select minutes, hours, or days, and finally select a value (Figure 10-370).


Figure 10-370 Show entries within... action

Tip: You can also access the Show entries within... action by right-clicking an entry.
Your panel is now filtered based on the time frame (Figure 10-371).

Figure 10-371 Time frame filtering

To disable this time frame filter, click Reset Date Filter in the Actions menu (Figure 10-372).


Figure 10-372 Reset Date Filter action

10.15 Configuration
In this section we describe how to configure different aspects of the SVC.

10.15.1 Configuring the Network


With the SVC, you can use both IP ports of each node; there are two active cluster ports on each node, which we describe in further detail in 2.6.1, Use of IP addresses and Ethernet ports on page 32.

Management IP addresses
In this section, we discuss the modification of management IP addresses. Management IP addresses can be defined for the system. The system supports one to four IP addresses: you can assign these addresses to two Ethernet ports and their backup ports. Multiple ports and IP addresses provide redundancy for the system in the event of connection interruptions. At any point in time, the system has an active management interface.

Ethernet Port 1 must always be configured, and the use of Port 2 is optional. Configuring both ports provides redundancy for the Ethernet connections. If you have configured both ports and you cannot connect through one IP address, attempt to access the system through the alternate IP address. Both IPv4 and IPv6 address formats are supported. Ethernet ports can have either IPv4 addresses or IPv6 addresses, or both.

Important: If you specify a new cluster IP address, the existing communication with the cluster through the GUI is lost. You need to relaunch the SAN Volume Controller Application from the GUI Welcome panel. You must use the new IP address to reconnect to the management GUI. When you reconnect, accept the new site certificate.

Modifying the IP address of the cluster, although quite simple, requires reconfiguration of other items within the SVC environment, including reconfiguring the central administration GUI by adding the cluster again with its new IP address.

Perform the following steps to modify the cluster IP addresses of our SVC configuration:
1. From the SVC Overview panel, select Settings and then Network.
2. In the left column, select Management IP Addresses.

3. The Management IP Addresses window opens (Figure 10-373 on page 842).

Figure 10-373 Modify management IP address

4. Click a port to configure the cluster's management IP address. Notice that you can configure both ports on the SVC node (Figure 10-374).

Figure 10-374 Modify management IP addresses


5. Depending on whether you are configuring an IPv4 or IPv6 cluster, there is different information to enter.
For IPv4:
- Type an IPv4 address in the IP Address field.
- Type an IPv4 gateway in the Gateway field.
- Type an IPv4 subnet mask in the Subnet Mask field.
For IPv6:
- Select the Show IPv6 button.
- Type an IPv6 prefix in the IPv6 Network Prefix field. The prefix can have a value of 0 to 127.
- Type an IPv6 address in the IP Address field.
- Type an IPv6 gateway in the Gateway field.
6. After the information is filled in, click OK to confirm the modification (Figure 10-374 on page 842).
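Although this section focuses on the GUI, the management IP address can also be changed from the CLI. The following is a minimal sketch, assuming the SVC 6.3 command set (svctask chsystemip); the address values are examples only, so adapt them to your environment:

IBM_2145:ITSO_SVC3:superuser>svctask chsystemip -clusterip 10.18.228.81 -gw 10.18.228.1 -mask 255.255.255.0 -port 1

As with the GUI, changing the address of the port that you are currently connected through drops your session, and you must reconnect using the new address.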

10.15.2 Configuring the Service IP addresses


The service IP address is used to access the service assistant tool, which you can use to perform service-related actions on a node. Each node in the cluster has its own service address. A node that is operating in service state does not operate as a member of the cluster.

Configuring the service IP addresses is important because they let you access the Service Assistant tool. In case of an issue with a node, you can view a detailed status and error summary, and manage service actions on it.

Perform the following steps to modify the service IP addresses of our SVC configuration:
1. From the SVC Overview panel, select Settings and then Network.
2. In the left column, select Service IP Addresses (Figure 10-375).

Figure 10-375 Service IP Addresses window

3. Select one node, then click the port to which you want to assign a service IP address (Figure 10-376 on page 844).


Figure 10-376 Configure Service IP window

4. Depending on whether you installed an IPv4 or IPv6 cluster, there is different information to enter.
For IPv4:
- Type an IPv4 address in the IP Address field.
- Type an IPv4 gateway in the Gateway field.
- Type an IPv4 subnet mask in the Subnet Mask field.
For IPv6:
- Select the Show IPv6 button.
- Type an IPv6 prefix in the IPv6 Network Prefix field. The prefix can have a value of 0 to 127.
- Type an IPv6 address in the IP Address field.
- Type an IPv6 gateway in the Gateway field.
5. After the information is filled in, click OK to confirm the modification (Figure 10-377).

Figure 10-377 Service IP window

6. Repeat steps 3 and 4 for each node of your cluster.
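As an alternative to the GUI, a node service IP address can also be set from the service CLI. This is a sketch only, assuming the 6.3 service command set (satask chserviceip); the addresses and the trailing panel name, which identifies the node, are example values:

satask chserviceip -serviceip 10.18.228.91 -gw 10.18.228.1 -mask 255.255.255.0 108283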

10.15.3 iSCSI configuration


From the iSCSI panel, you can configure settings for the cluster to attach to iSCSI-attached hosts as shown in Figure 10-378 on page 845.


Figure 10-378 iSCSI Configuration

The following parameters can be updated:

Cluster Name
It is important to set the cluster name correctly because it is part of the iSCSI qualified name (IQN) for the node.

Important: If you change the name of the cluster after iSCSI is configured, iSCSI hosts might need to be reconfigured.

To change the cluster name, click the cluster name and specify the new name.

Cluster name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The name can be between one and 63 characters in length.

iSCSI Ethernet Ports
iSCSI configuration can be set for each Ethernet port. Perform the following steps to change an iSCSI IP address: Click a port and, depending on whether you installed an IPv4 or IPv6 cluster, enter the appropriate information. For IPv4, enter an IP address, a gateway, and a subnet mask. For IPv6, enter an IP prefix, an IP address, and a gateway.

After the information is filled in, click OK to confirm the modification.

Important: When reconfiguring IP ports, be aware that already configured iSCSI connections must be reconnected if changes are made to the IP addresses of the nodes.

iSCSI Aliases


An iSCSI alias is a user-defined name that identifies the node to the host. Perform the following steps to change an iSCSI alias: Click an iSCSI alias and specify a name for it. Each node has a unique iSCSI name associated with two IP addresses. After the host has initiated an iSCSI connection to a target node, the IQN of the target node is visible in the iSCSI configuration tool on the host.

iSNS and CHAP
You can specify the IP address for the iSCSI Storage Name Service (iSNS). Host systems use the iSNS server to manage iSCSI targets and for iSCSI discovery.

You can also enable CHAP to authenticate the system and iSCSI-attached hosts with a specified shared secret. The CHAP secret is the authentication method that is used to restrict access to the same connection by other iSCSI hosts. You can set the CHAP secret for the whole cluster under the cluster properties or for each host definition; the CHAP secret must be identical on the server and on the cluster/host definition. You can create an iSCSI host definition without using a CHAP secret.
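The same iSCSI settings can also be made from the CLI. A minimal sketch, assuming the 6.3 command set (cfgportip, chnode, and chsystem); the node ID, port number, addresses, alias, and secret shown here are example values:

IBM_2145:ITSO_SVC3:superuser>svctask cfgportip -node 1 -ip 10.18.229.91 -gw 10.18.229.1 -mask 255.255.255.0 1
IBM_2145:ITSO_SVC3:superuser>svctask chnode -iscsialias itso_node1 1
IBM_2145:ITSO_SVC3:superuser>svctask chsystem -iscsiauthmethod chap -chapsecret mysecret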

10.15.4 Fibre Channel information


As shown in Figure 10-379, the Fibre Channel panel can be used to display the Fibre Channel connectivity between nodes and other storage systems and hosts that are attached through the Fibre Channel network. Filtering can be done by selecting one of the following fields:
- All nodes, storage systems, and hosts
- Cluster Nodes
- Storage Systems
- Hosts

Figure 10-379 Fibre Channel


10.15.5 Event notifications


SAN Volume Controller can use Simple Network Management Protocol (SNMP) traps, syslog messages, and Call Home email to notify you and the IBM Support Center when significant events are detected. Any combination of these notification methods can be used simultaneously. Notifications are normally sent immediately after an event is raised. However, there are events that can occur because of service actions that are being performed. If a recommended service action is active, these events are notified only if they are still unfixed when the service action completes.

10.15.6 Email notifications


The Call Home feature transmits operational and event-related data to you and IBM through a Simple Mail Transfer Protocol (SMTP) server connection in the form of an event notification email. When configured, this function alerts IBM service personnel about hardware failures and potentially serious configuration or environmental issues.

Perform the following steps to configure email event notifications:
1. From the SVC Overview panel, select Settings and then Event Notifications.
2. In the left column, select Email.
3. Click Enable Email Event Notification (Figure 10-380).

Figure 10-380 Email Event Notification

4. A wizard appears (Figure 10-381). You must enter contact information (contact name, email reply address, machine location, and phone numbers) so that IBM Support personnel can contact this person to assist with problem resolution. Ensure that all contact information is valid, then click Next.

Figure 10-381 Define Company Contact information


5. On the next page (Figure 10-382), configure at least one email server that is used by your site, and optionally enable inventory reporting. Enter a valid IP address and a server port for each server added, and ensure that the email servers are valid. Inventory reports allow IBM service personnel to proactively notify you of any known issues with your system; to activate them, enable inventory reporting and choose a reporting interval in this window.

Figure 10-382 Configure Email Servers and Inventory Reporting window

6. Next (Figure 10-383), you can configure the email addresses that receive notifications. It is advisable to configure an email address belonging to a support user with the error event notification type enabled, so that IBM service personnel are notified if an error condition occurs on your system. Ensure that all email addresses are valid.

Figure 10-383 Configure Email Addresses window

7. The last window (Figure 10-384 on page 849) displays a summary of your Email Event Notification wizard. Click Finish to complete the setup.


Figure 10-384 Email Event Notification Summary

8. The wizard is now closed. Additional information has been added to the panel as shown on Figure 10-385. You can edit or disable email notification from this window.

Figure 10-385 Configure Email Event Notification window configured
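For reference, the same Call Home setup can be scripted from the CLI instead of the wizard. A hedged sketch, assuming the 6.3 command set; all addresses, names, and phone numbers here are placeholders, not the real IBM Call Home destination:

IBM_2145:ITSO_SVC3:superuser>svctask chemail -reply admin@example.com -contact "John Doe" -primary 0123456789 -location "Computer room 1"
IBM_2145:ITSO_SVC3:superuser>svctask mkemailserver -ip 10.18.228.100 -port 25
IBM_2145:ITSO_SVC3:superuser>svctask mkemailuser -address support@example.com -error on -warning off -info off -usertype support
IBM_2145:ITSO_SVC3:superuser>svctask startemail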

10.15.7 SNMP notifications


Simple Network Management Protocol (SNMP) is a standard protocol for managing networks and exchanging messages. The system can send SNMP messages that notify personnel about an event, and you can use an SNMP manager to view the SNMP messages that SAN Volume Controller sends. You can configure an SNMP server to receive informational, error, or warning notifications by entering the following information (see Figure 10-386 on page 850):

IP Address
The address for the SNMP server.


Server port
The remote port number for the SNMP server. The remote port number must be a value between 1 and 65535.

Community
The SNMP community is the name of the group to which devices and management stations that run SNMP belong.

Event notifications:
- Select Error if you want the user to receive messages about problems, such as hardware failures, that must be resolved immediately.

Important: Navigate to Recommended Actions to run the fix procedures on these notifications.

- Select Warning if you want the user to receive messages about problems and unexpected conditions. Investigate the cause immediately to determine any corrective action.

Important: Navigate to Recommended Actions to run the fix procedures on these notifications.

- Select Info if you want the user to receive messages about expected events. No action is required for these events.

Figure 10-386 SNMP configuration

To remove an SNMP server, click the remove button next to the server entry. To add another SNMP server, click the add button.
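The same SNMP server definition can be created from the CLI. A minimal sketch, assuming the 6.3 command set (mksnmpserver); the address and community name are example values:

IBM_2145:ITSO_SVC3:superuser>svctask mksnmpserver -ip 10.18.228.101 -community public -error on -warning on -info off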

Syslog notifications
The syslog protocol is a standard protocol for forwarding log messages from a sender to a receiver on an IP network. The IP network can be either IPv4 or IPv6. The system can send syslog messages that notify personnel about an event.


You can configure a syslog server to receive log messages from various systems and store them in a central repository by entering the following information (see Figure 10-387):

IP Address
The address for the syslog server.

Facility
The facility determines the format for the syslog messages and can be used to determine the source of the message.

Message format
The message format depends on the facility. The system can transmit syslog messages in two formats: the concise message format provides standard detail about the event, and the expanded format provides more details about the event.

Event notifications:
- Select Error if you want the user to receive messages about problems, such as hardware failures, that must be resolved immediately.

Important: Navigate to Recommended Actions to run the fix procedures on these notifications.

- Select Warning if you want the user to receive messages about problems and unexpected conditions. Investigate the cause immediately to determine any corrective action.

Important: Navigate to Recommended Actions to run the fix procedures on these notifications.

- Select Info if you want the user to receive messages about expected events. No action is required for these events.

Figure 10-387 Syslog configuration

To remove a syslog server, click the remove button next to the server entry. To add another syslog server, click the add button.

The syslog messages can be sent in either compact message format or expanded message format. Example 10-1 on page 852 shows a compact format syslog message.


Example 10-1 Compact syslog message example

IBM2145 #NotificationType=Error #ErrorID=077001 #ErrorCode=1070 #Description=Node CPU fan failed #ClusterName=SVCCluster1 #Timestamp=Wed Jul 02 08:00:00 2008 BST #ObjectType=Node #ObjectName=Node1 #CopyID=0 #ErrorSequenceNumber=100

Example 10-2 shows an expanded format syslog message.
Example 10-2 Full format syslog message example

IBM2145 #NotificationType=Error #ErrorID=077001 #ErrorCode=1070 #Description=Node CPU fan failed #ClusterName=SVCCluster1 #Timestamp=Wed Jul 02 08:00:00 2008 BST #ObjectType=Node #ObjectName=Node1 #CopyID=0 #ErrorSequenceNumber=100 #ObjectID=2 #NodeID=2 #MachineType=21454F2#SerialNumber=1234567 #SoftwareVersion=5.1.0.0 (build 8.14.0805280000)#FRU=fan 24P1118, system board 24P1234 #AdditionalData(0->63)=00000000210000000000000000000000000000000000000000000000000 00000000000000000000000000000000000000000000000000000000000000000000000#Additional Data(64-127)=000000000000000000000000000000000000000000000000000000000000000000000 00000000000000000000000000000000000000000000000000000000000
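A syslog server can likewise be defined from the CLI. A sketch, assuming the 6.3 command set (mksyslogserver); the address and facility number are example values:

IBM_2145:ITSO_SVC3:superuser>svctask mksyslogserver -ip 10.18.228.102 -facility 0 -error on -warning on -info on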

10.15.8 Using the General panel


Use the General panel to change time and date settings, work with license options, download configuration settings, download software upgrade packages, and change management GUI preferences.

10.15.9 Date and Time


Perform the following steps to configure time settings: 1. From the SVC Overview panel, select Settings and then General. 2. In the left column, select Date and Time (Figure 10-388).

Figure 10-388 Date and Time window


3. From this panel, you can modify:

The time zone
Select a time zone for your cluster using the drop-down list.

The date and time
Two options are available:
- If you are not using a Network Time Protocol (NTP) server, select the Set Date and Time button and then manually enter the date and time for your cluster, as shown in Figure 10-389. You can also use the Use Browser Settings button to automatically adjust the date and time of your SVC cluster to your local workstation's date and time.

Figure 10-389 Set Date and Time window

- If you are using a Network Time Protocol (NTP) server, select the Set NTP Server IP Address button and then enter the IP address of the NTP server, as shown in Figure 10-390.

Figure 10-390 Set NTP Server IP Address window

4. Finally, click Save to validate your changes.
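The same date, time, and NTP settings are available from the CLI. A minimal sketch, assuming the 6.3 command set; the time zone ID, time stamp, and NTP address are example values:

IBM_2145:ITSO_SVC3:superuser>svcinfo lstimezones
IBM_2145:ITSO_SVC3:superuser>svctask settimezone -timezone 520
IBM_2145:ITSO_SVC3:superuser>svctask setsystemtime -time 040917002011
IBM_2145:ITSO_SVC3:superuser>svctask chsystem -ntpip 10.18.228.103

The lstimezones command lists the available time zone IDs, the -time value uses the MMDDHHmmYYYY format, and setting -ntpip switches the cluster to NTP-based time.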

10.15.10 Licensing
Perform the following steps to configure licensing settings: 1. From the SVC Overview panel, select Settings and then General. 2. In the left column, select Licensing (Figure 10-391 on page 854).


Figure 10-391 Licensing window

3. Set the licensing values of the IBM System Storage SAN Volume Controller for the following elements:

Virtualization Limit
Enter the capacity of the storage that will be virtualized by this cluster.

FlashCopy Limit
Enter the capacity that is available for FlashCopy mappings.

Important: The used capacity for FlashCopy mappings is the sum of all of the volumes that are the source volumes of a FlashCopy mapping.

Global and Metro Mirror Limit
Enter the capacity that is available for Metro Mirror and Global Mirror relationships.

Important: The used capacity for Global Mirror and Metro Mirror is the sum of the capacities of all of the volumes that are in a Metro Mirror or Global Mirror relationship; both master and auxiliary volumes are counted.
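These values can also be set from the CLI with the chlicense command and verified with lslicense. A sketch, assuming the 6.3 command set; the capacities (in TB) are example values:

IBM_2145:ITSO_SVC3:superuser>svctask chlicense -virtualization 50
IBM_2145:ITSO_SVC3:superuser>svctask chlicense -flash 20
IBM_2145:ITSO_SVC3:superuser>svctask chlicense -remote 20
IBM_2145:ITSO_SVC3:superuser>svcinfo lslicense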

10.15.11 Upgrading software


See 10.16, Upgrading SVC software on page 855, for information about this topic.

10.15.12 Setting GUI Preferences


Perform the following steps to configure GUI preferences:
1. From the SVC Overview window, select Settings and then General.
2. In the left column, select GUI Preferences (Figure 10-392 on page 855).


Figure 10-392 GUI Preferences window

3. From here you can configure the following elements:

Refresh GUI Objects
This action causes the GUI to refresh all of its views. It clears the GUI cache, and the GUI looks up every object again.

Important: This is a support-only action button.

Restore Default Browser Preferences
This action deletes all GUI preferences that are stored in the browser and restores the default preferences.

Table Selection
If selected, this option shows Select/Deselect All in each table in the cluster (Figure 10-393).

Figure 10-393 Select/Deselect All

Navigation
If selected, this option shows navigation as tabs when not in low graphics mode (Figure 10-394 on page 855).

Figure 10-394 Tabs example

10.16 Upgrading SVC software


In this section we explain the operations that are performed to upgrade your SVC software from version 6.1.0.0 to version 6.3.0.0.

The format for the software upgrade package name ends in four positive integers separated by dots. For example, a software upgrade package might have the name IBM_2145_INSTALL_6.3.0.0.

10.16.1 Precautions before upgrade


Take the following precautions before attempting an upgrade.

Important: Before attempting any SVC code update, read and understand the SVC concurrent compatibility and code cross-reference matrix. Go to the following site and click the link for Latest SAN Volume Controller code:
http://www-1.ibm.com/support/docview.wss?uid=ssg1S1001707

During the upgrade, each node in your SVC cluster is automatically shut down and restarted by the upgrade process. Because each node in an I/O Group provides an alternate path to volumes, use Subsystem Device Driver (SDD) to make sure that all I/O paths between all hosts and SANs are working. If you do not perform this check, certain hosts might lose connectivity to their volumes and experience I/O errors when the SVC node that provides that access is shut down during the upgrade process. You can check the I/O paths by using the SDD datapath query commands.

Double-check that your uninterruptible power supply unit power configuration is also set up correctly (even if your cluster is running without problems). Specifically, double-check these areas:
- Ensure that your uninterruptible power supply units are all getting their power from an external source and that they are not daisy-chained. Make sure that each uninterruptible power supply unit is not supplying power to another node's uninterruptible power supply unit.
- Ensure that the power cable and the serial cable coming from the back of each node go back to the same uninterruptible power supply unit. If the cables are crossed and go back to separate uninterruptible power supply units, then during the upgrade, as one node is shut down, another node might also be mistakenly shut down.

10.16.2 SVC software upgrade test utility


The SVC software upgrade test utility is an SVC software utility that checks for known issues that can cause problems during an SVC software upgrade. It is available from the following location:
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S4000585

You can use the svcupgradetest utility to check for known issues that might cause problems during a SAN Volume Controller software upgrade. The software upgrade test utility can be downloaded in advance of the upgrade process, or it can be downloaded and run directly during the software upgrade, as guided by the upgrade wizard.

You can run the utility multiple times on the same cluster to perform a readiness check in preparation for a software upgrade. We strongly advise running this utility a final time immediately prior to applying the SVC upgrade, making sure that you have the latest release, because new releases of the utility might have appeared since it was originally downloaded.

The installation and use of this utility are nondisruptive and do not require any SVC nodes to be restarted, so there is no interruption to host I/O. The utility is installed only on the current configuration node.

System administrators must continue to check whether the version of code that they plan to install is the latest version. You can obtain information about the latest code levels at this website:
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1001707#_Latest_SAN_Volume_Controller%20Code

This utility is intended to supplement rather than duplicate the existing tests that are carried out by the SVC upgrade procedure (for example, checking for unfixed errors in the error log).
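If you prefer the CLI to the wizard, the utility can be installed and run by hand. A sketch, assuming the 6.3 command set; the package file name depends on the version you downloaded and is an example here (copy it to the cluster with pscp first):

IBM_2145:ITSO_SVC3:superuser>svctask applysoftware -file IBM2145_INSTALL_svcupgradetest_6.3
IBM_2145:ITSO_SVC3:superuser>svcupgradetest -v 6.3.0.0

The -v parameter names the software version to which you intend to upgrade.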

10.16.3 Upgrade procedure


To upgrade the SVC cluster software, perform the following steps:
1. With a supported web browser, go to your cluster IP address (https://<your cluster ip address>); the SVC GUI login window then displays, as shown in Figure 10-395 on page 857.

Figure 10-395 SVC GUI login window

2. Log in with your superuser ID and password; the SVC management home page displays. From there, go to the Settings and then General menu (Figure 10-396) and click Advanced.


Figure 10-396 Configuration menu

3. In the Advanced menu, click the Upgrade Software item; the window shown in Figure 10-397 on page 858 will display.

Figure 10-397 Upgrade Software

From the window shown in Figure 10-397, you can click the following buttons:
- Check for updates: Use this to check on the IBM website whether an SVC software version is available that is newer than the version installed on your SVC. You need an Internet connection to perform this check.
- Launch Upgrade Wizard: Use this to launch the software upgrade process.
4. Click Launch Upgrade Wizard to start the upgrade process; you will be redirected to the window shown in Figure 10-398.

Figure 10-398 Upgrade Package


From the window shown in Figure 10-398 you can download the Upgrade Test Utility from the IBM website, or you can browse and upload the Upgrade Test Utility from the location where you saved it, as shown in Figure 10-399 on page 859.

Figure 10-399 Upload Test Utility

5. When the Upgrade Test Utility has been uploaded, the window shown in Figure 10-400 displays.

Figure 10-400 Upload completed

6. When you click Next (Figure 10-400), the Upgrade Test Utility will be applied. You will be redirected to the window shown in Figure 10-401.


Figure 10-401 Upgrade Test Utility applied

7. Click Close (Figure 10-401 on page 860), and you will be redirected to the window shown in Figure 10-402. From here you can run your Upgrade Test Utility for the level you need.

Figure 10-402 Run Upgrade Test Utility

8. Click Next (Figure 10-402), and you will be redirected to the window shown in Figure 10-403. At this point the Upgrade Test Utility will run. You will see the suggested actions (if any are needed) or simply the window shown in Figure 10-403.

Figure 10-403 Upgrade Test Utility result

9. Click Next (Figure 10-403) to start the SVC software upload procedure, and you will be redirected to the window shown in Figure 10-404.


Figure 10-404 Upgrade Package

From the window shown in Figure 10-404 you can download the SVC software upgrade package directly from the IBM website, or you can browse and upload the software upgrade package from the location where you saved it, as shown in Figure 10-405 on page 861.

Figure 10-405 Upload SVC software upgrade package

Click Open (Figure 10-405), and you will be redirected to the windows shown in Figure 10-406 and Figure 10-407.

Figure 10-406 Uploading SVC software package

Figure 10-407 shows that the SVC package uploading has completed.


Figure 10-407 Uploading SVC software package complete

10. Click Next and you will be redirected to the window shown in Figure 10-408.

Figure 10-408 System ready for upgrade

11. When you click Finish (Figure 10-408 on page 862), the SVC software upgrade starts and you will be redirected to the window shown in Figure 10-409.

Figure 10-409 Upgrading a node

When you click Close (Figure 10-409), the warning message shown in Figure 10-410 will be displayed.

Figure 10-410 Warning message

12. When you click OK (Figure 10-410), you will have completed upgrading the SVC software, and you are redirected to the window shown in Figure 10-411.

Figure 10-411 Upgrade in progress

After a few minutes the window shown in Figure 10-412 on page 863 will display, showing that the first node has been upgraded.

Figure 10-412 First node is upgraded

Now the process installs the new SVC software version on the remaining node in the cluster. You can check the upgrade status as shown in Figure 10-412.
13. After all nodes have been rebooted, you will have completed the SVC software upgrade task.
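The same upgrade can also be driven from the CLI instead of the wizard. A hedged sketch; the PuTTY session name, cluster address, and upload directory are example values that follow the usual SVC conventions:

C:\Program Files\PuTTY>pscp -load ITSO-SVC3 IBM_2145_INSTALL_6.3.0.0 admin@10.18.229.81:/home/admin/upgrade/
IBM_2145:ITSO_SVC3:superuser>svctask applysoftware -file IBM_2145_INSTALL_6.3.0.0
IBM_2145:ITSO_SVC3:superuser>svcinfo lssoftwareupgradestatus

The lssoftwareupgradestatus command lets you monitor the progress of the rolling upgrade.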

10.17 Service Assistant with the GUI


SVC V6.1 introduced a new method for performing service tasks on the system. In addition to performing service tasks from the front panel, you can now also service a node through an Ethernet connection using either a web browser or the command line interface. The web browser runs a new service application known as the Service Assistant. Almost all of the functions that were previously available only through the front panel are now available over the Ethernet connection, offering the benefits of an easier-to-use interface that can be used remotely from the cluster. In this section we describe useful tasks that you can perform with the new Service Assistant application using a web browser GUI.


Attention: We do not detail certain actions because those actions must be run under the direction of IBM Support. Do not try to perform actions of this kind without IBM Support direction.

To be able to use the SVC Service Assistant application with the GUI, you must first have a service IP address configured for each node of your cluster. For more information about how to set the SVC service IP address, see 4.4.3, Configuring the Service IP Addresses on page 131.

With a supported web browser, go to the following address and you will reach the Service Assistant login window (Figure 10-413 on page 864):
https://<your service ip address>/service/

Figure 10-413 Service Assistant login page

Log in with your superuser password and you will reach the Service Assistant Home page (Figure 10-414).


Figure 10-414 Service Assistant Home page

From the Service Assistant Home page (Figure 10-414) you can obtain an overview of your SVC cluster and the node status. You can view a detailed status and error summary and manage service actions for the current node. The current node is the node on which service-related actions are performed. The connected node is the node that displays the Service Assistant and provides the interface for working with other nodes on the system. To manage a different node, select the radio button on the left of the node panel name, and the details for the selected node are shown. Using the pull-down menu on the Service Assistant Home page, you can select the action that you want to execute on the selected node (Figure 10-415).


Figure 10-415 Service Assistant Home page - possible actions

As shown in Figure 10-415, for the selected node it is possible to:
- Enter Service State
- Power off
- Restart
- Reload

10.17.1 Placing an SVC node into Service State


To place a node into a Service State, select the node where the action will be performed from the Service Assistant Home Page. From the pull-down menu, select Enter Service State and then click GO (Figure 10-415). A confirmation window displays (Figure 10-416 on page 867). Click OK.


Figure 10-416 Service State confirmation window

At this point the information window displays (Figure 10-417). Wait until the node is available, then click OK.

Figure 10-417 Action completed window

Now you will be returned to the Service Assistant Home Page, where you can see the status of the node that just entered the service state (Figure 10-418 on page 868). Also note event code 690, which means that the node is held in the service state.


Figure 10-418 Node in service state

Now you have different choices in the Service Assistant Home Page pull-down menu, as shown in Figure 10-419:
- Hold in Service State
- Power off
- Restart
- Reload

Figure 10-419 Possible actions

10.17.2 Exiting an SVC node from Service State


To exit a node from Service State, select the node where the action will be performed from the Service Assistant Home Page. From the pull-down menu, select Exit Service State then click GO (Figure 10-420 on page 869).


Figure 10-420 Exit Service State action

A confirmation window will display (Figure 10-421). Then click OK.

Figure 10-421 Confirmation window

At this point the information window for your action will display (Figure 10-417 on page 867). Wait until the node is available, then click OK. When the node is available, the window shown in Figure 10-422 displays.

Figure 10-422 Exiting from Service Status

You can see that the node is starting, and the event shown in the Error column is simply a regular message. Click Refresh until your node is active and no event is displayed in the Error column. In our example we used the Exit Service State action from the Service Assistant Home Page, but it is also possible to exit the service state using the Restart action.
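For completeness, a node can also be placed into and taken out of the service state from the service CLI. This is a sketch only, assuming the 6.3 satask command set is reachable over SSH at the service IP address; the panel name identifying the node is an example value:

satask startservice 108283
satask stopservice 108283
sainfo lsservicenodes

The lsservicenodes command shows the status of the nodes that are visible to the connected node.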


10.17.3 Rebooting an SVC node


To reboot a node, select the node where the action will be performed from the Service Assistant Home Page. From the pull-down menu, select Reboot and then click GO (Figure 10-423).

Figure 10-423 Reboot action

A confirmation window is displayed (Figure 10-424).

Figure 10-424 Confirmation window

On the next confirmation window, wait until the operation completes successfully and then click OK (Figure 10-425 on page 871).


Figure 10-425 Operation completed

From the Service Assistant Home Page, notice that the node that you just rebooted has disappeared (Figure 10-426). This node will still be visible in an Offline State from the GUI or from the SVC command line interface.

Figure 10-426 Only one node remaining

The reboot of the node has to complete before the node becomes visible again. Normally a node reboot takes about 14 minutes.

10.17.4 Collect Logs page


With the Service Assistant application you can create and download a package of log and trace files, or download existing log files from the node. The support package, also called a SNAP file, can be used by support personnel to understand problems on the system. Unless advised otherwise by IBM Support, collect the package with the latest statesave. Figure 10-427 shows the Service Assistant page where it is possible to collect logs.

Figure 10-427 Collect Logs page

To create a support package with the latest statesave, select the related option and click Create and Download. The page shown in Figure 10-428 on page 872 is displayed.


Figure 10-428 Action completed page

You will be asked where you want to save the support package (Figure 10-429).

Figure 10-429 Save page
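The same collection can be done without the GUI. A sketch, assuming the 6.3 service command set (satask snap) and that the resulting package is written to the /dumps directory on the node; the PuTTY session and address are example values:

satask snap
C:\Program Files\PuTTY>pscp -unsafe -load ITSO-SVC3 admin@10.18.229.81:/dumps/snap* c:\snapfiles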

10.17.5 Manage Cluster page


In this page you can see cluster configuration data for the current node (Figure 10-430).

Figure 10-430 Manage Cluster page


10.17.6 Recover Cluster


You can recover the entire cluster using the cluster recovery procedure (also known as T3 recovery) if cluster data has been lost from all nodes. The cluster recovery procedure re-creates the cluster using saved configuration data; however, it might not be able to restore all volume data. This action cannot be performed on an active node: to recover the cluster, the node must either be a candidate node or in service state.

Before attempting the cluster recovery procedure, investigate the cause of the cluster failure and attempt to resolve those issues using other service procedures.

Attention: We do not detail this procedure because the recover cluster action must be run under the direction and guidance of IBM Support. Do not attempt this action unless ordered to by IBM Support. We include this description simply to make you aware that there is a recover cluster process.

The cluster recovery procedure is a two-stage process:
1. Click Prepare for Recovery to search the system for the most recent backup file and quorum drive. If this step is successful, the Recover Cluster panel displays the details of the backup file and quorum drive that were found. Verify that the dates and times for these files are the most recent.
2. If you are satisfied with these files, click Recover to re-create the cluster. If the backup file or quorum drive is not suitable for the recovery, exit the task by selecting a different menu option.

Note: If the connected node and the current node are the same, the connection to the node can be lost.

Figure 10-431 shows the Recover Cluster page.

Figure 10-431 Recover Cluster page

10.17.7 Reinstall software


You can either install a package from the support site or rescue the software from another node that is visible on the fabric. When the node is added to a cluster, the software level on the node is updated to match the level of the cluster software. This action cannot be performed on an active node.


To reinstall the software, the node must either be a candidate node or in service state. During the reinstallation, the node becomes unavailable. If the connected node and the current node are the same, the connection to the Service Assistant might be lost.

Figure 10-432 shows the Re-install Software page. On this page, clicking Check for software updates redirects you to the IBM website, where you will find any available updates for the SVC software:
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1001707#_Latest_SAN_Volume_Controller%20Code

Figure 10-432 Re-install Software page

Attention: We do not detail this procedure because the reinstallation of software action must be run under the direction of IBM support. Do not try to perform this action unless guided by IBM support.

10.17.8 Upgrade Manually


During a standard upgrade procedure, the cluster upgrades each of the nodes systematically. The standard upgrade procedure is the best practice method for upgrading software on nodes. However, to provide more flexibility in the upgrade process, you can also upgrade each node individually.

When upgrading the software manually, you remove a node from the cluster, upgrade the software on the node, and return the node to the cluster. You repeat this process for the remaining nodes until the last node is removed from the cluster. At this point the remaining nodes switch to running the new software. When the last node is returned to the cluster, it upgrades and runs the new level of software.

This action cannot be performed on an active node. To upgrade software manually, the nodes must either be candidate nodes or in service state. During this procedure, every node must be upgraded to the same software level; you cannot interrupt the upgrade and switch to installing a different software level. During the upgrade, the node becomes unavailable. If the connected node and the current node are the same, the connection to the Service Assistant might be lost.

Figure 10-433 on page 875 shows the Upgrade Manually page.


Figure 10-433 Upgrade Manually page

10.17.9 Modify WWNN


You can verify that the WWNN for the node is consistent. Change the WWNN only if directed to do so in the service procedures. This action cannot be performed on an active node; to modify the WWNN, the node must either be a candidate node or in service state.

Attention: Only change the WWNN if directed to do so in the service procedures.

Figure 10-434 shows the Modify WWNN page.

Figure 10-434 Modify WWNN page

10.17.10 Change Service IP


You can set the service IP address assigned to Ethernet port 1 for the current node. This IP address is used to access the service assistant and the service command line. All nodes in the cluster have different service addresses. If the connected node and the current node are the same, the connection to the service assistant might be lost. To regain access to the service assistant, log in to the service assistant using the new service IP address. Figure 10-435 on page 876 shows the Change Service IP page.


Figure 10-435 Change Service IP page

10.17.11 Configure CLI access


Use this panel if a valid superuser SSH key is not available for either a node that is currently in service state or a candidate node. The SSH key can be used to temporarily gain access to the command-line interface or to use secure copy tools, such as scp. The key is removed when the node is restarted or rejoins a cluster. Figure 10-436 shows the Configure CLI access page.

Figure 10-436 Config CLI access page

10.17.12 Restart Service


With the Service Assistant application, you can restart any of the following services:
- CIMOM
- Web server
- Easy Tier
- Service Location

Figure 10-437 on page 877 shows the Restart Service page.


Figure 10-437 Restart Service page


Appendix A.

Performance data and statistics gathering


In this appendix we provide a brief overview of the performance analysis capabilities of SVC 6.3. We also describe a method you can use to collect and process SVC performance statistics. It is beyond the scope of this book to provide an in-depth understanding of performance statistics or explain how to interpret them. For a more comprehensive look at the performance of the IBM System Storage SAN Volume Controller (SVC), see SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521, which is available at this site: http://www.redbooks.ibm.com/abstracts/sg247521.html?Open


SVC performance overview


Although storage virtualization with SVC provides many administrative benefits, it can also provide a substantial increase in performance for a variety of workloads. The SVC's caching capability and its ability to stripe volumes across multiple disk arrays can provide a significant performance improvement over what can otherwise be achieved when using midrange disk subsystems.

To ensure that the desired performance levels of your system are maintained, monitor performance periodically to provide visibility into potential problems that exist or are developing, so that they can be addressed in a timely manner.

Performance considerations
When designing an SVC storage infrastructure or maintaining an existing infrastructure, you need to consider many factors in terms of their potential impact on performance. These factors include, but are not limited to: dissimilar workloads competing for the same resources, overloaded resources, insufficient resources available, poor performing resources, and similar performance constraints.

There are a few high-level rules that you should always keep in mind when designing your SAN and SVC layout:

- Host-to-SVC ISL oversubscription: This is the most significant I/O load across ISLs. The recommendation is to maintain a maximum of 7-to-1 oversubscription. Going higher is possible, but tends to lead to I/O bottlenecks. This also assumes a core-edge design, where the hosts are on the edge and the SVC is on the core.

- Storage-to-SVC ISL oversubscription: This is the second most significant I/O load across ISLs. The maximum oversubscription is 7-to-1; going higher is not supported. Again, this assumes a multiple switch SAN fabric design.

- Node-to-node ISL oversubscription: This is the least significant load of the three possible oversubscription bottlenecks. In standard setups this can be ignored; while it is not entirely negligible, it does not contribute significantly to ISL load. However, it is mentioned here with regard to the split cluster capability that was made available with 6.3.0. When running in this manner, the number of ISL links becomes much more important. As with the Storage-to-SVC ISL oversubscription, this also has a requirement for a maximum of 7-to-1 oversubscription. Exercise caution and careful planning when you determine the number of ISLs to implement. If you need additional assistance, contact your IBM Representative and request technical assistance.

- ISL trunking/PortChanneling: For the best performance and availability, we highly recommend the use of ISL trunking/PortChanneling. Independent ISL links can easily become overloaded and turn into performance bottlenecks, whereas bonded or trunked ISLs automatically share load and provide better redundancy in the case of a failure.

- Number of paths per host multipath device: The maximum supported number of paths per multipath device visible on the host is 8. While SDDPCM, its relatives, and most vendor multipathing software support more, the SVC expects a maximum of 8. In general you will only see a performance impact from more paths than that, and while the SVC will work with more than 8, it is technically unsupported.

- Do not intermix dissimilar array types or sizes: While the SVC supports an intermix of differing storage within storage pools, it is best to always use the same array model, the same RAID mode, the same RAID size (RAID-5 6+P+S does not mix well with RAID-6 14+2), and the same drive speeds.

Rules and guidelines are no substitution for monitoring performance. Monitoring performance can both provide a validation that design expectations are met and identify opportunities for improvement.

SVC performance perspectives


The SAN Volume Controller (SVC) is a combination product that consists of software and hardware. The software was developed by IBM's Research Group and was designed to run on commodity hardware (mass-produced Intel-based CPUs with mass-produced expansion cards), while providing distributed cache and a scalable cluster architecture. One of the main goals of this design was to be able to leverage refreshes in hardware.

Currently, the SVC cluster is scalable up to eight nodes, and these nodes can be swapped for newer hardware while online. This provides great investment value, as the nodes are relatively inexpensive and a node swap can be done online, providing an instant performance boost with no license changes. Newer nodes, such as the CF8 or CG8 models, which dramatically increased the cache from 8 GB to 24 GB per node, provide an extra benefit on top of the typical refresh cycle. The following link provides the node replacement/swap and node addition instructions:
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD104437

Performance is near linear when adding nodes into the cluster, until performance eventually becomes limited by the attached components. Also, while virtualization with the SVC provides significant flexibility in terms of the components used, it does not diminish the necessity of designing the system around the components so that it can deliver the desired level of performance.

The key item for planning is your SAN layout. Different switch vendors have slightly different planning requirements, but the end goal is always to maximize the bandwidth that is available to the SVC ports. The SVC is one of the few devices capable of driving ports to their limits on average, so it is imperative that a lot of thought be put into planning the SAN layout.

Essentially, SVC performance improvements are gained by spreading the workload across a greater number of back-end resources and through the additional caching that is provided by the SVC cluster. Eventually, however, the performance of individual resources becomes the limiting factor.

Performance monitoring
This section highlights several performance monitoring techniques.

Collecting performance statistics


SVC is constantly collecting performance statistics. The default frequency at which files are created is five-minute intervals; prior to 4.3.0, the default was fifteen-minute intervals, with a supported range of 15 to 60 minutes. The collection interval can be changed using the svctask startstats command.

The statistics files (VDisk, MDisk, and node) are saved at the end of each sampling interval, and a maximum of 16 files (of each type) are stored before they are overlaid in a rotating log fashion. This provides statistics for the most recent 80-minute period if the default five-minute sampling interval is used. The SVC supports user-defined sampling intervals of 1 to 60 minutes.

The maximum space required for a performance statistics file is 1,153,482 bytes. There can be up to 128 different files (16 for each of the 3 types across 8 nodes) across the 8 SVC nodes. This makes the total space requirement a maximum of 147,645,694 bytes for all performance statistics from all nodes in an SVC cluster. Keep this in mind in time-critical situations; otherwise, the size required is not important, as the SVC node hardware is more than capable of handling it.

You can define the sampling interval by using the svctask startstats -interval 2 command to collect statistics at two-minute intervals; see 9.8.7, Starting statistics collection on page 524.

Note: While more frequent collection intervals provide a more detailed view of what happens within the SVC, they shorten the amount of time that historical data is available on the SVC. For example, instead of an 80-minute period of data with the default 5-minute interval, adjusting to 2-minute intervals as above gives you a 32-minute period instead.

Since SVC 5.1.0, cluster-level statistics are no longer supported. Instead, use the per-node statistics that are collected. The sampling of the internal performance counters is coordinated across the cluster (by the config node) so that when a sample is taken, all nodes sample their internal counters at the same time. It is important to collect all files from all nodes for a complete analysis. Tools such as TPC perform this rather intensive data collection on your behalf.

Statistics file naming


The files that are generated are written to the /dumps/iostats/ directory. The file names have the following formats:
- Nm_stats_<node_frontpanel_id>_<date>_<time> for MDisk statistics
- Nv_stats_<node_frontpanel_id>_<date>_<time> for VDisk statistics
- Nn_stats_<node_frontpanel_id>_<date>_<time> for node statistics
- Nd_stats_<node_frontpanel_id>_<date>_<time> for disk drive statistics (not used for SVC)

The node_frontpanel_id is that of the node on which the statistics were collected. The date is in the form <yymmdd> and the time is in the form <hhmmss>. Following is an example of an MDisk statistics file name:
Nm_stats_104603_111003_094739

Example A-1 shows typical MDisk, volume, node, and disk drive statistics file names.
Example A-1 Filename of per node statistics

IBM_2145:ITSO_SVC3:superuser>svcinfo lsiostatsdumps
id iostat_filename
0  Nn_stats_104603_111003_094739
1  Nd_stats_104603_111003_094739
2  Nv_stats_104603_111003_094739
3  Nm_stats_104603_111003_094739
4  Nn_stats_104603_111003_100238
5  Nv_stats_104603_111003_100238
6  Nm_stats_104603_111003_100238
7  Nd_stats_104603_111003_100238
8  Nm_stats_104603_111003_101736
9  Nv_stats_104603_111003_101736
10 Nd_stats_104603_111003_101736
11 Nn_stats_104603_111003_101736
12 Nn_stats_104603_111003_103235
13 Nm_stats_104603_111003_103235
14 Nv_stats_104603_111003_103235
15 Nd_stats_104603_111003_103235
16 Nn_stats_104603_111003_104734



Tip: The performance statistics files can be copied from the SVC nodes to a local drive on your workstation using pscp.exe (included with PuTTY) from an MS-DOS command line, as shown in this example:

C:\Program Files\PuTTY>pscp -unsafe -load ITSO-SVC3 admin@10.18.229.81:/dumps/iostats/* c:\statsfiles

Use the -load parameter to specify the session that is defined in PuTTY. Specify the -unsafe parameter when you use wildcards.

The performance statistics files are in XML format, and they can be manipulated using various tools and techniques. An example of a tool that you can use to analyze these files is the SVC Performance Monitor (svcmon).

Note: The svcmon tool is not an officially supported tool; it is provided on an as-is basis. You can obtain this tool from the following website:
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS3177

Figure A-1 shows an example of the type of chart that you can produce using the SVC performance statistics.


Figure A-1 Spreadsheet example

Real-Time Performance Monitoring


Starting with version 6.2.0, SVC supports real-time performance monitoring. Real-time performance statistics provide short-term status information for the SAN Volume Controller system. The statistics are shown as graphs in the management GUI and can also be viewed from the CLI.

With system-level statistics, you can quickly view the CPU utilization and the bandwidth of volumes, interfaces, and MDisks. Each of these graphs displays the current bandwidth in either megabytes per second or I/Os per second, as well as a view of bandwidth over time.

Each node collects various performance statistics, mostly at five-second intervals, and they are available from the config node in a clustered environment. This can help you determine the performance impact of a specific node. As with system statistics, node statistics help you to evaluate whether the node is operating within normal performance metrics.

Real-time performance monitoring gathers the following system-level performance statistics:
- CPU utilization
- Port utilization and I/O rates
- Volume and MDisk I/O rates
- Bandwidth
- Latency

Real-time statistics collection is not a configurable option and cannot be disabled.


Real-Time performance monitoring with the CLI


The following commands are available for monitoring the statistics through the CLI: lsnodestats and lssystemstats. Next we show examples of how to use them.

The lsnodestats command provides performance statistics for the nodes that are part of a clustered system, as shown in Example A-2 (note that the output is truncated and shows only some of the available statistics). You can also specify a node name in the command to limit the output to a specific node.
Example A-2 lsnodestats command output

IBM_2145:ITSO_SVC3:superuser>lsnodestats
node_id node_name  stat_name stat_current stat_peak stat_peak_time
1       ITSOSVC3N1 cpu_pc    1            2         111003154220
1       ITSOSVC3N1 fc_mb     0            9         111003154220
1       ITSOSVC3N1 fc_io     1724         1799      111003153930
...
2       ITSOSVC3N2 cpu_pc    1            1         111003154246
2       ITSOSVC3N2 fc_mb     0            0         111003154246
2       ITSOSVC3N2 fc_io     1689         1770      111003153857
...

The previous example shows statistics for the two node members of cluster ITSO_SVC3, nodes ITSOSVC3N1 and ITSOSVC3N2. For each of these nodes, the following columns are displayed:
- stat_name: The name of the statistic field
- stat_current: The current value of the statistic field
- stat_peak: The peak value of the statistic field in the last five minutes
- stat_peak_time: The time that the peak occurred

The lssystemstats command, however, lists the same set of statistics as lsnodestats, but representative of all nodes in the cluster. The values for these statistics are calculated from the node statistics values in the following way:
- Bandwidth: Sum of the bandwidth of all nodes
- Latency: Average latency for the cluster. This is calculated using data from the whole cluster, not as an average of the single node values
- IOps: Total IOps of all nodes
- CPU percentage: Average CPU percentage of all nodes

Example A-3 shows the resulting output of the lssystemstats command.
Example A-3 lssystemstats command output

IBM_2145:ITSO_SVC3:superuser>lssystemstats
stat_name stat_current stat_peak stat_peak_time
cpu_pc    1            1         111003160859
fc_mb     0            0         111003160859
fc_io     1291         1420      111003160504
...

Table A-1 provides a brief description of each of the statistics presented by the lssystemstats and lsnodestats commands.


Table A-1 lssystemstats and lsnodestats statistics field name descriptions

Field name      Unit          Description
cpu_pc          Percentage    Utilization of node CPUs
fc_mb           MB/s          Fibre Channel bandwidth
fc_io           IO/s          Fibre Channel throughput
sas_mb          MB/s          SAS bandwidth
sas_io          IO/s          SAS throughput
iscsi_mb        MB/s          iSCSI bandwidth
iscsi_io        IO/s          iSCSI throughput
write_cache_pc  Percentage    Write cache fullness (updated every ten seconds)
total_cache_pc  Percentage    Total cache fullness (updated every ten seconds)
vdisk_mb        MB/s          Total VDisk bandwidth
vdisk_io        IO/s          Total VDisk throughput
vdisk_ms        Milliseconds  Average VDisk latency
mdisk_mb        MB/s          MDisk (SAN and RAID) bandwidth
mdisk_io        IO/s          MDisk (SAN and RAID) throughput
mdisk_ms        Milliseconds  Average MDisk latency
drive_mb        MB/s          Drive bandwidth
drive_io        IO/s          Drive throughput
drive_ms        Milliseconds  Average drive latency
vdisk_w_mb      MB/s          VDisk write bandwidth
vdisk_w_io      IO/s          VDisk write throughput
vdisk_w_ms      Milliseconds  Average VDisk write latency
mdisk_w_mb      MB/s          MDisk (SAN and RAID) write bandwidth
mdisk_w_io      IO/s          MDisk (SAN and RAID) write throughput
mdisk_w_ms      Milliseconds  Average MDisk write latency
drive_w_mb      MB/s          Drive write bandwidth
drive_w_io      IO/s          Drive write throughput
drive_w_ms      Milliseconds  Average drive write latency
vdisk_r_mb      MB/s          VDisk read bandwidth
vdisk_r_io      IO/s          VDisk read throughput
vdisk_r_ms      Milliseconds  Average VDisk read latency
mdisk_r_mb      MB/s          MDisk (SAN and RAID) read bandwidth
mdisk_r_io      IO/s          MDisk (SAN and RAID) read throughput
mdisk_r_ms      Milliseconds  Average MDisk read latency
drive_r_mb      MB/s          Drive read bandwidth
drive_r_io      IO/s          Drive read throughput
drive_r_ms      Milliseconds  Average drive read latency

Real-Time performance monitoring with the GUI


The real-time statistics are also available from the SVC GUI. Navigate to Monitoring > Performance, as shown in Figure A-2, to open the Performance monitoring window.

Figure A-2 SVC Monitoring menu

The Performance monitoring window, shown in Figure A-3, is divided into four sections that provide utilization views for the following resources:
CPU utilization: Shows the overall CPU usage percentage
Volumes: Shows the overall volume utilization with the following fields: Read, Write, Read latency, and Write latency
Interfaces: Shows overall statistics for each of the available interfaces: Fibre Channel, iSCSI, and SAS
MDisks: Shows the following overall statistics for the MDisks: Read, Write, Read latency, and Write latency


Figure A-3 Performance monitoring window

You can also choose to view performance statistics for each of the available nodes of the system, as shown in Figure A-4.

Figure A-4 Select system node

It is also possible to change the metric between MBps and I/Os per second (Figure A-5).

Figure A-5

In any of these views, you can use the cursor to select any point in time and see the exact value and when it occurred. As soon as you place the cursor over the timeline, it becomes a dotted line showing the values gathered at that point, as shown in Figure A-6.


Figure A-6 Detailed Resource utilization

For each of the resources, you can choose which values to display by selecting the check box next to each value. For example, in the MDisks view shown in Figure A-7, all four available fields are selected: Read, Write, Read latency, and Write latency.

Figure A-7 Detailed Resource utilization

Performance data collection and Tivoli Storage Productivity Center for Disk
Although you can obtain performance statistics in standard .xml files, that is not the most practical or user-friendly way to analyze SVC performance statistics. Tivoli Storage Productivity Center (TPC) for Disk is the official and supported IBM tool used to collect and analyze SVC performance statistics. Tivoli Storage Productivity Center for Disk comes preinstalled on the System Storage Productivity Center console and can be made available by activating the license. For more information about using Tivoli Storage Productivity Center to monitor your storage subsystem, see Monitoring Your Storage Subsystems with TotalStorage Productivity Center, SG24-7364, which is available at the following website:
http://www.redbooks.ibm.com/abstracts/sg247364.html?Open

Note: Tivoli Storage Productivity Center for Disk Version 4.2.1 supports the new SVC port quality statistics provided in SVC Version 4.3 and later. Monitoring these metrics in addition to the performance metrics can help you maintain a stable SAN environment.


Appendix B. Terminology
In this appendix we define terms commonly used within this book that relate to the SVC and its concepts. To see the complete set of terms that are related to the SVC, refer to the Glossary section of the SVC Information Center. It is available at the following website: http://publib.boulder.ibm.com/infocenter/svc/ic/index.jsp


Commonly encountered terms


This book includes the following SVC-related terminology.

Auto Data Placement Mode


Auto Data Placement Mode is an Easy Tier operating mode in which the host activity on all the volume extents in a pool is measured, a migration plan is created, and automatic extent migration is then performed.

back-end
See front-end and back-end.

channel extender
A channel extender is a device used for long distance communication connecting other SAN fabric components. Generally, channel extenders can involve protocol conversion to asynchronous transfer mode (ATM), Internet Protocol (IP), or another long distance communication protocol.

clustered system (SVC)


A clustered system, formerly known as cluster, is a group of up to eight SVC nodes that presents a single configuration, management and service interface to the user.

cold extent
A cold extent is a volume's extent that will not get any performance benefit if moved from HDD to SSD. A cold extent also refers to an extent that needs to be migrated onto HDD if it currently resides on SSD.

Consistency Group
A Consistency Group is a group of copy relationships between virtual volumes or data sets that are maintained with the same time reference so that all copies are consistent in time. A Consistency Group can be managed as a single entity.

copied
Copied is a FlashCopy state that indicates that a copy has been triggered after the copy relationship was created. The copied state indicates that the copy process is complete, and the target disk has no further dependence on the source disk. The time of the last trigger event is normally displayed with this status.

configuration node
While the cluster is operational, a single node in the cluster is appointed to provide configuration and service functions over the network interface. This node is termed the configuration node. This configuration node manages the information that describes the cluster configuration, and it provides a focal point for configuration commands. If the configuration node fails, another node in the cluster will transparently assume that role.


counterpart SAN
A counterpart SAN is a non-redundant portion of a redundant SAN. A counterpart SAN provides all of the connectivity of the redundant SAN, but without the 100% redundancy. SVC nodes are typically connected to a redundant SAN made up of two counterpart SANs. A counterpart SAN is often called a SAN fabric.

disk tier
It is likely that the MDisks (LUNs) presented to the SVC cluster will have different performance attributes due to the type of disk or RAID array on which they reside. The MDisks may be on 15 K RPM Fibre Channel or SAS disks, Nearline SAS or SATA disks, or even solid-state disks (SSDs). Thus, a storage tier attribute is assigned to each MDisk, the default being generic_hdd. SVC 6.1 introduced a new disk tier attribute for SSDs known as generic_ssd.

Directed Maintenance Procedures


The fix procedures, also known as Directed Maintenance Procedures (DMPs), ensure that you fix any outstanding errors in the error log. To do so, from the Monitoring panel click Events. The Next Recommended Action is displayed at the top of the events screen. Select Run This Fix Procedure and follow the instructions.

Easy Tier
Easy Tier is a volume performance function within the SVC that provides automatic data placement of a volume's extents in a multitiered storage pool. The pool normally contains a mix of SSDs and HDDs. Easy Tier measures host I/O activity on the volume's extents and migrates hot extents onto the SSDs to ensure maximum performance.

evaluation mode
The evaluation mode is an Easy Tier operating mode in which the host activity on all the volume extents in a pool is measured only. No automatic extent migration is performed.

event (error)
An event is an occurrence of significance to a task or system. Events can include completion or failure of an operation, a user action, or the change in state of a process. Prior to SVC V6.1, this was known as an error.

event code
An event code is a value used to identify an event condition to a user. This value might map to one or more event IDs or to values that are presented on the service panel. This value is used to report error conditions to IBM and to provide an entry point into the service guide.

event ID
An event ID is a value that is used to identify a unique error condition detected by the SVC. An event ID is used internally in the cluster to identify the error.

excluded
The excluded condition is a status condition that describes an MDisk that the SVC has decided is no longer sufficiently reliable to be managed by the cluster. The user must issue a command to include the MDisk in the cluster-managed storage again.

extent
An extent is a fixed-size unit of data that is used to manage the mapping of data between MDisks and volumes. The size of an extent can range from 16 MB to 8 GB.


FC port logins
FC port logins refers to the number of hosts that can see any one SVC node port. The SVC has a maximum limit per node port of Fibre Channel logins allowed.

front-end and back-end


The SVC takes MDisks to create pools of capacity from which volumes are created and presented to application servers (hosts). The MDisks reside in the controllers at the back-end of the SVC, in the SVC-to-back-end-controller zones. The volumes presented to the hosts reside at the front-end of the SVC, in the SVC-to-host zones.

field replaceable units


Field replaceable units (FRUs) are individual parts that are replaced in their entirety when any one of the unit's components fails. They are held as spares by the IBM service organization.

grain
A grain is the unit of data that is represented by a single bit in a FlashCopy bitmap (64 KB/256 KB) in the SVC. It is also the unit to extend the real size of a thin provisioned volume (32, 64, 128, or 256 KB).

host bus adapter (HBA)


A host bus adapter (HBA) is an interface card that connects a server to the SAN environment via its internal bus system, for example PCI Express.

host ID
A host ID is a numeric identifier assigned to a group of host FC ports or iSCSI host names for the purposes of LUN mapping. For each host ID, there is a separate mapping of SCSI IDs to volumes. The intent is to have a one-to-one relationship between hosts and host IDs, although this relationship cannot be policed.

host mapping
Host mapping refers to the process of controlling which hosts have access to specific volumes within a cluster (it is equivalent to LUN masking). Prior to SVC V6.1, this was known as VDisk-to-Host mapping.

hot extent
A hot extent is a frequently accessed volume extent that gets a performance benefit if moved from HDD onto SSD.

internal storage
Internal storage refers to an array of managed disks (MDisks) and drives that are held in enclosures and in nodes that are part of the SVC cluster.

IQN (iSCSI qualified name)


IQN refers to special names that identify both iSCSI initiators and targets. One of the three name formats that iSCSI provides is IQN. The format is iqn.yyyy-mm.{reversed domain name}; for example, the default for an SVC node is: iqn.1986-03.com.ibm:2145.<clustername>.<nodename>


iSNS (Internet storage name service)


iSNS refers to the Internet Storage Name Service (iSNS) protocol, which is used by a host system to manage iSCSI targets and to automate the discovery, management, and configuration of iSCSI and FC devices. It is defined in RFC 4171.

image mode
The image mode is an access mode that establishes a one-to-one mapping of extents in an existing LUN or (image mode) MDisk with the extents in a volume.

I/O group
Each pair of SVC cluster nodes is known as an input/output (I/O) group. An I/O group has a set of volumes associated with it that are presented to host systems. Each SVC node is associated with exactly one I/O group. The nodes in an I/O group provide a failover, failback function for each other.

ISL hop
An inter-switch link (ISL) is a connection between two switches and is counted as one ISL hop. The number of hops is always counted on the shortest route between two N-ports (device connections). In an SVC environment, the number of ISL hops is counted on the shortest route between the pair of nodes farthest apart. The SVC recommends maximum hops for some fabric paths.

local fabric
The local fabric is composed of SAN components (switches, cables, and so on) that connect the components (nodes, hosts, and switches) of the local cluster together.

local and remote fabric interconnect


The local fabric interconnect and the remote fabric interconnect are the SAN components that are used to connect the local and remote fabrics. Depending on the distance between the two fabrics, they can be single-mode optical fibers that are driven by long-wave (LW) gigabit interface converters (GBICs) or SFPs, or more sophisticated components, such as channel extenders or special SFP modules that are used to extend the distance between SAN components.

LU and LUN
LUN is formally defined by the SCSI standards as a logical unit number. It is used as an abbreviation for an entity, which exhibits disk-like behavior, for example, a volume or an MDisk.

Managed disk (MDisk)


An MDisk is a SCSI disk that is presented by a RAID controller and that is managed by the SVC. The MDisk is not visible to host systems on the SAN.

Managed Disk Group (storage pool)


See storage pool.

Master Console (MC)


The Master Console is an SVC term used to refer to the System Storage Productivity Center server that runs optional software, used to assist in the management of the SVC.


mirrored volume
A mirrored volume is a single virtual volume that has two physical volume copies. The primary physical copy is known within the SVC as copy 0, and the secondary copy is known within the SVC as copy 1.

node
An SVC node is a hardware entity that provides virtualization, cache, and copy services for the cluster. SVC nodes are deployed in pairs called I/O groups. One node in a clustered system is designated the configuration node.

oversubscription
The term oversubscription refers to the ratio of the sum of the traffic on the initiator N-port connections to the traffic on the most heavily loaded ISLs, where more than one connection is used between these switches. Oversubscription assumes a symmetrical network, and a specific workload applied equally from all initiators and sent equally to all targets. A symmetrical network means that all the initiators are connected at the same level, and all the controllers are connected at the same level.

preparing phase
Before you start the FlashCopy process, you must prepare a FlashCopy mapping. The preparing phase is used to flush a volume's data from the cache in preparation for the FlashCopy operation.

RAS
RAS stands for reliability, availability, and serviceability.

RAID
RAID stands for redundant array of independent disks: two or more physical disk drives combined in an array in a certain way, incorporating a RAID level for failure protection, better performance, or both. The most common RAID levels are 0, 1, 5, 6, and 10.

RAID 0
RAID 0 is a data striping technique used across an array; no data protection is provided.

RAID 1
RAID 1 is a mirroring technique used on a storage array in which two or more identical copies of data are maintained on separate mirrored disks.

RAID 10
RAID 10 is a combination of a RAID 0 stripe that is mirrored (RAID 1). Thus, two identical copies of striped data exist; there is no parity.

RAID 5
RAID 5 is an array that has a data stripe which includes a single logical parity drive. The parity check data is distributed across all the array's disks.

RAID 6
RAID 6 is a RAID level that has two logical parity drives per stripe, calculated with different algorithms and therefore can continue to process read and write requests to all of the array's virtual disks in the presence of two concurrent disk failures.


redundant SAN
A redundant SAN is a SAN configuration in which there is no single point of failure (SPoF), so no matter what component fails, data traffic will continue. Connectivity between the devices within the SAN is maintained, although possibly with degraded performance, when an error has occurred. A redundant SAN design is normally achieved by splitting the SAN into two independent counterpart SANs (two SAN fabrics), so that if one path of the counterpart SAN is destroyed, the other counterpart SAN path keeps functioning.

remote fabric
The remote fabric is composed of SAN components (switches, cables, and so on) that connect the components (nodes, hosts, and switches) of the remote cluster together. There can be significant distances between the components in the local cluster and those components in the remote cluster.

SAN
SAN stands for storage area network.

SAN Volume Controller (SVC)


The IBM System Storage SAN Volume Controller is an appliance designed for attachment to a variety of host computer systems, which carries out block-level virtualization of disk storage.

SCSI
SCSI stands for Small Computer System Interface.

Service Location Protocol


The Service Location Protocol (SLP) is an Internet service discovery protocol that allows computers and other devices to find services in a local area network without prior configuration. It has been defined in RFC 2608.

Solid State Disk


Solid State Disk (SSD) is a disk made from solid state memory and thus has no moving parts. Most SSDs use NAND-based flash memory technology. It is defined to the SVC as a disk tier generic_ssd.

storage pool (Managed Disk group)


A storage pool is a collection of storage capacity, made up of MDisks, which provides the pool of storage capacity for a specific set of volumes. A storage pool can contain more than one tier of disk, known as a multitier storage pool, which is a prerequisite of Easy Tier automatic data placement. Prior to SVC V6.1, this was known as a Managed Disk Group (MDG).

System Storage Productivity Center


IBM System Storage Productivity Center (SSPC) is a hardware server on which many software products are preinstalled. The required storage management products are activated or enabled through licenses. The SSPC can be used to manage the SVC and DS8000 products.

thin provisioning (thin provisioned volume)


Thin provisioning refers to the ability to define storage, usually a storage pool or volume, with a logical capacity size that is larger than the actual physical capacity assigned to that pool or volume. Thus, a thin provisioned volume is a volume with a virtual capacity that is different from its real capacity. Prior to SVC V6.1 this was known as space efficient.


volume
A volume is an SVC logical device that appears to host systems attached to the SAN as a SCSI disk. Each volume is associated with exactly one I/O Group. It will have a preferred node within the I/O group. Prior to SVC 6.1, this was known as a VDisk or virtual disk.

volume copy
A volume copy is a physical copy of the data that is stored on a volume. Mirrored volumes have two such copies. Non-mirrored volumes have one copy.


Appendix C. SAN Volume Controller Split I/O Group Overview, Diagnostics, and Recovery Guidelines


Introduction
In this appendix we discuss the available Split I/O Group options, configuration, restrictions, and limitations. We also focus on the diagnostic procedures that help you understand what may be happening in your Split I/O Group environment after a critical event, and that put you in the best position to make the correct decisions. This could mean:
Waiting until the failure in one of the two sites is fixed, or
Declaring a disaster and starting the recovery actions that are described later in this appendix


Split I/O Group overview


Starting with SVC 5.1 GA, support was introduced for distributing SVC nodes across two independent locations up to 10 km apart. With SAN Volume Controller (SVC) 6.3 we offer significant enhancements for a Split I/O Group in two different configurations:
No ISL configuration:
  Passive WDM devices can be used between both sites
  No ISLs between SVC nodes (similar to the SVC 5.1 supported configuration)
  Distance extension up to 40 km
ISL configuration:
  ISLs between SVC nodes (not allowed with releases earlier than 6.3)
  Maximum distance similar to Metro Mirror (MM) distances
  Physical requirements similar to MM requirements
  ISL distance extension with active and passive WDM devices
This appendix explores the characteristics of both.

No ISL Configuration
In the No ISL configuration, each SVC I/O Group consists of two independent SVC nodes. In contrast to a standard SVC environment, nodes from the same I/O Group are not placed close together; they are distributed across two sites. If a node fails, the other node in the same I/O Group takes over the workload, which is standard in an SVC environment. Volume Mirroring provides a consistent data copy in both sites. If one storage subsystem fails, the remaining subsystem processes the I/O requests.

The combination of SVC node distribution in two independent data centers and a copy of the data in two independent data centers creates a new level of availability. All SVC nodes and the storage system in a single site may fail; the other SVC nodes take over the server load using the remaining storage subsystems. The volume ID, the volume behavior, and the volume assignment to the server stay the same. No server reboot and no failover scripts, and thus no script maintenance, are required. However, consider that a Split I/O Group typically requires special setup and might exhibit substantially reduced performance.

In a Split I/O Group environment, the SVC nodes from the same I/O Group reside in two different sites. A third quorum location is required for handling split brain scenarios. Figure C-1 shows an example of a No ISL Split I/O Group configuration as it is currently supported in SVC V5.1.


Figure C-1 Standard SVC 5.1 environment using Volume Mirroring

The Split I/O Group uses the SVC Volume Mirroring function. Volume Mirroring allows the creation of one volume with two copies of MDisk extents; there are not two volumes with the same data on them. The two data copies may be in different MDisk groups. Thus, Volume Mirroring can minimize the impact to volume availability if one set of MDisks goes offline. The resynchronization between both copies after recovering from a failure is incremental; SVC starts the resynchronization process automatically.

Like a standard volume, each mirrored volume is owned by one I/O Group with a preferred node. Thus, the mirrored volume goes offline if the whole I/O Group goes offline. The preferred node performs all I/O operations, reads as well as writes, and can be set manually.

The quorum disk keeps the status of the mirrored volume. The last status (in sync or not in sync) and the definitions of the primary and secondary volume copies are saved there. Thus, an active quorum disk is required for Volume Mirroring. To ensure data consistency, SVC disables all mirrored volumes if there is no longer access to any quorum disk candidate. Therefore, quorum disk placement is a very important point in a Volume Mirroring and Split I/O Group configuration.

Best practices:
Drive read I/O to the local storage system. For distances less than 10 km, drive the read I/O to the faster of the two disk subsystems if they are not identical. Take long distance links into account.
The preferred node should stay at the same site as the server accessing the volume.
The Volume Mirroring primary copy should stay at the same site as the server accessing the volume, in order to avoid any potential latency impact where the longer distance solution is implemented (see the sketch after this list).
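Keeping the primary copy local to the accessing server can be done from the CLI. A minimal sketch, assuming a mirrored volume named SLES_3650_10_01 (used in the examples later in this appendix) whose copy 0 resides in the site-local storage pool:

IBM_2145:Split_Cluster_1:admin>chvdisk -primary 0 SLES_3650_10_01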


In many cases there is no independent third site available. It is possible to use an already existing building or computer room from the two main sites to create a third, independent failure domain. There are several things to consider:
The third failure domain needs an independent power supply (or UPS). If the hosting site fails, the third failure domain should continue to operate.
A separate storage controller for the active SVC quorum disk is required; otherwise, the SVC loses multiple quorum disk candidates at the same time if a single storage subsystem fails.
Each site (failure domain) should be placed in a different location in case of fire.
Fibre Channel cabling should not go through another site (failure domain). Otherwise, a fire in one failure domain could destroy the links (and break access) to the SVC quorum disk.

As shown in Figure C-1, the setup is similar to a standard SVC environment, but the nodes are distributed across two sites. SVC nodes and data are equally distributed across two separate sites with independent power sources, known as separate failure domains (Failure Domain 1 and 2). The quorum disk is located in a third site with a separate power source (Failure Domain 3). For each I/O Group, four dedicated fiber-optic links between site 1 and site 2 are required.

Where the No ISL configuration is implemented over more than 10 km, passive WDM devices (which need no power) can be used to pool multiple fiber-optic links with different wavelengths into one or two connections between both sites. SFPs with different wavelengths (colored SFPs, that is, SFPs used in CWDM devices) are required here. The maximum distance between both major sites is limited to 40 km. Because we have to prevent the risk of burst traffic (due to a lack of buffer-to-buffer credits), the link speed must be limited, depending on the cable length between nodes in the same I/O Group, as shown in Table C-1.
Table C-1 SVC code level, length, and speed

SVC code level   Minimum length   Maximum length   Max. link speed
>= SVC 5.1       >= 0 km          = 10 km          8 Gbps
>= SVC 6.3       >= 10 km         = 20 km          4 Gbps
>= SVC 6.3       >= 20 km         = 40 km          2 Gbps

The quorum disk at the third site must be Fibre Channel attached; FCIP can be used if the round-trip delay time to the third site is always less than 80 ms (40 ms in each direction). Figure C-2 shows a detailed diagram where passive WDM devices are used to extend the links between site 1 and site 2.


Figure C-2 Connection with passive WDM

For best performance, servers in site 1 should access the volumes in site 1 (preferred node and primary copy in site 1). SVC Volume Mirroring copies the data to Storage 1 and Storage 2. A similar setup should be implemented for servers in site 2, with access to the SVC node in site 2. The configuration shown in Figure C-2 covers several failover cases:
Power off FC Switch 1: FC Switch 2 takes over the load and routes I/O to SVC Node 1 and SVC Node 2.
Power off SVC Node 1: SVC Node 2 takes over the load and serves the volumes to the server. SVC Node 2 changes the cache mode to write-through to avoid data loss in case SVC Node 2 fails as well.
Power off Storage 1: SVC waits a short time (15 to 30 seconds), pauses the volume copies on Storage 1, and continues I/O operations using the remaining volume copies on Storage 2.
Power off Site 1: The server no longer has access to the local switches, causing access loss. Optionally, avoid this access loss by using additional fiber-optic links between site 1 and site 2 for server access.


Of course, the same scenarios are valid for site 2, and similar scenarios apply in a mixed failure environment (for example, failure of Switch 1, SVC Node 2, and Storage 2). No manual failover or failback activities are required, because SVC performs the failover and failback operations. The use of AIX Live Partition Mobility or VMware VMotion can increase the number of use cases significantly. Online system migrations, including running virtual machines and applications, are possible, which is a perfectly acceptable way to handle maintenance operations.

Advantages:
Business continuity solution distributed across two independent data centers
Configuration similar to a standard SVC clustered system
Limited hardware effort: passive WDM devices can be used, but are not required

Requirements:
Four independent fiber-optic links for each I/O Group between both data centers
LW SFPs with support over long distance for direct connection to the remote storage area network (SAN)
Optional use of passive WDM devices:
  Passive WDM device: no power required for operation
  Colored SFPs required to make different wavelengths available
  Colored SFPs must be supported by the switch vendor
Two independent fiber-optic links between site 1 and site 2 recommended
Third site for quorum disk placement required (see the sketch after this list)
Quorum disk storage system attached through Fibre Channel, with requirements similar to Metro Mirror storage (80 ms round-trip delay time, 40 ms in each direction)
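The quorum disk candidates and the currently active quorum disk can be verified with the lsquorum command, and in a Split I/O Group the active quorum disk is usually pinned to the third-site MDisk. A hedged sketch, using the MDisk name from the examples later in this appendix; verify the exact chquorum syntax against the CLI guide for your code level:

IBM_2145:Split_Cluster_1:admin>lsquorum
IBM_2145:Split_Cluster_1:admin>chquorum -override yes -mdisk DS4700_SVC_Q1 0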

Bandwidth reduction:
Buffer credits, also called buffer-to-buffer (BB) credits, are used as a flow control method by Fibre Channel technology and represent the number of frames that a port can store. Buffer-to-buffer credits are therefore necessary to have multiple Fibre Channel frames in flight in parallel, and an appropriate number of them is required for optimal performance. The number of buffer credits needed to achieve maximum performance over a given distance depends on the speed of the link:
1 buffer credit = 2 km at 1 Gbit/s
1 buffer credit = 1 km at 2 Gbit/s
1 buffer credit = 0.5 km at 4 Gbit/s
1 buffer credit = 0.25 km at 8 Gbit/s

These guidelines give the minimum numbers. Performance drops if there are not enough buffer credits for the link distance and link speed, as shown in Figure C-3.


Figure C-3 FC link speed, B2B credits, and distance

FC link speed   B2B credits for 10 km   Distance with 8 credits
1 Gbit/s        5                       16 km
2 Gbit/s        10                      8 km
4 Gbit/s        20                      4 km
8 Gbit/s        40                      2 km

The number of buffer-to-buffer credits provided by an SVC Fibre Channel host bus adapter (HBA) is limited. An HBA of a 2145-CF8 node provides 41 buffer credits, which is sufficient for a 10 km distance at 8 Gbit/s. The SVC adapters in all earlier models provide only 8 buffer credits, which is enough only for a 4 km distance at 4 Gbit/s link speed. These numbers are determined by the HBA hardware and cannot be changed. We suggest using 2145-CF8 or CG8 nodes for distances longer than 4 km in order to provide enough buffer-to-buffer credits at a reasonable FC speed.
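As a rough cross-check of these numbers, the guideline above (one credit per 2 km at 1 Gbit/s, scaling linearly with link speed) can be turned into a small calculation. A minimal sketch, not an official sizing tool:

# Estimate the minimum buffer-to-buffer credits for a one-way distance (km)
# and a link speed (Gbit/s), using the "1 credit = 2 km at 1 Gbit/s" rule.
distance_km=10
speed_gbps=8
awk -v d="$distance_km" -v s="$speed_gbps" 'BEGIN {
    c = d * s / 2                 # km covered per credit = 2 / speed
    printf "minimum credits: %d\n", (c == int(c)) ? c : int(c) + 1
}'
# Prints "minimum credits: 40", consistent with the 41 credits of a 2145-CF8 HBA.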

ISL Configuration
Where a distance beyond 40 km between site 1 and site 2 is required, a different configuration needs to be applied. The setup is quite similar to a standard SVC environment, but the nodes are allowed to communicate over long distance through ISL links between both sites, using active or passive WDM and a different SAN configuration. Figure C-4 shows the detailed diagram for a configuration with active or passive WDM.

Figure C-4 Connection with active/passive WDM and ISL

The Split I/O Group configuration shown in Figure C-4 supports distances of up to 300 km (the same recommendation as for Metro Mirror).


Technically, SVC tolerates a round-trip delay of up to 80 ms between nodes. Cache mirroring traffic, rather than Metro Mirror traffic, is sent across the inter-site link, and data is mirrored to back-end storage using Volume Mirroring. Data is written by the preferred node to both the local and the remote storage; the SCSI write protocol results in two round trips. This latency is hidden from the application by the write cache.

A Split I/O Group is often used to move workload between servers at different sites. VMotion or an equivalent can be used to move applications between servers, so applications no longer necessarily issue I/O requests to the local SVC nodes. SCSI write commands from hosts to remote SVC nodes result in an additional two round trips' worth of latency that is visible to the application. For Split I/O Group configurations in a long distance environment, it is therefore advisable to use the local site for host I/O.

Some switches and distance extenders use extra buffers and proprietary protocols to eliminate one round trip's worth of latency for SCSI write commands. These devices are already supported for use with SVC. They give no benefit or impact to inter-node communication, but they do benefit host to remote SVC I/Os and SVC to remote storage controller I/Os.
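To put these distances into perspective, light propagates in fiber at roughly 5 microseconds per km, so each round trip costs about 0.01 ms per km of one-way distance. A minimal sketch of the extra write latency seen by a host issuing I/O across the inter-site link, ignoring switch and WDM equipment delays:

# Rough added latency for a SCSI write served across the inter-site link:
# two round trips (as described above) at ~0.01 ms per km of one-way distance.
distance_km=100
awk -v d="$distance_km" 'BEGIN {
    rtt_ms = d * 2 * 0.005        # one round trip at 5 us per km
    printf "per-write penalty: ~%.1f ms (2 round trips)\n", 2 * rtt_ms
}'
# For 100 km this prints a penalty of roughly 2.0 ms per write.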

Requirements
A Split I/O Group with an ISL configuration must meet the following requirements:
Four independent, extended SAN fabrics, as shown in Figure C-4, named Public SAN1, Public SAN2, Private SAN1, and Private SAN2. Each public or private SAN can be built with a dedicated FC switch or director, or can be just a virtual SAN in a Cisco or Brocade FC switch or director.
Two ports per SVC node attached to the private SANs.
Two ports per SVC node attached to the public SANs.
SVC Volume Mirroring between site 1 and site 2.
Hosts and storage attached to the public SANs.
Third-site quorum attached to the public SANs.

Figure C-5 shows the possible configurations with a virtual SAN.


Figure C-5 ISL configuration with Virtual SAN

Figure C-6 shows the possible configurations with a physical SAN.

Figure C-6 ISL configuration with physical SAN

Use a third site to house a quorum disk. Connections to the third site can be through FCIP because of the distance (no FCIP or FC switches are shown in the preceding figures, for simplicity). In many cases there is no independent third site available. It is possible to use an already existing building from the two main sites to create a third, independent failure domain, but you have to consider several things:


The third failure domain needs an independent power supply (or UPS). If the hosting site fails, the third failure domain should continue to operate.
Each site (failure domain) should be placed in a different fire compartment.
Fibre Channel cabling should not go through another site (failure domain). Otherwise, a fire in one failure domain would destroy the links (and break access) to the SVC quorum disk.

Applying these considerations, the SVC clustered system is well protected even though two failure domains are in the same building. Consider an IBM Advanced Technical Support (ATS) review, or process an RPQ / SCORE, to review the proposed configuration.

The storage system that provides the quorum disk at the third site must support extended quorum disks. Storage systems that provide extended quorum support are listed at the following website:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003907

Additional requirements:
Four active or passive WDM devices, two per site, to extend the public and private SANs over distance.
Place independent storage systems at the primary and secondary sites, and use Volume Mirroring to mirror the host data between the storage systems at the two sites.
SAN Volume Controller nodes that are in the same I/O Group must be located at the two different sites.

Diagnosis and recovery planning


To achieve the most benefit from the SVC Split I/O Group configuration, post-installation planning must include several important steps. These steps ensure that your infrastructure can be recovered, with the same or a quite different configuration, in one of the surviving sites with minimal impact to customer applications. Proper planning and configuration backup also help minimize possible downtime by avoiding changes to the SVC, the back-end storage, and the SAN.

Basically, we can differentiate the recovery scenarios as follows:
Recover a fully redundant SVC configuration in the surviving site without a Split I/O Group.
Recover a fully redundant SVC configuration in the surviving site with a Split I/O Group implemented in the same site or with a brand new remote site.
Recover one of the above scenarios with a failback chance at the original site after the critical event, for example, if you had a failure and do not need to replace any hardware.

Independently of which scenario you face, the following practices are valid and should be applied:
1. Collect a detailed SVC configuration, as follows:
a. Run a daily SVC configuration backup with the SVC CLI commands, as shown in Example C-1.


Example: C-1 Saving the SVC configuration

IBM_2145:Split_Cluster_1:admin>svcconfig backup
.................................................................................
CMMVC6155I SVCCONFIG processing completed successfully
IBM_2145:Split_Cluster_1:admin>lsdumps
id filename
0  159680.trc.old
.
.
24 SVC.config.backup.xml_159072

b. Save the produced .xml file in a safe place as shown in Example C-2.
Example: C-2 Copying the configuration file

C:\Program Files\PuTTY>pscp -load SVC_Mainz admin@10.18.229.84:/tmp/SVC.config.backup.xml_159072 c:\temp\clibackup.xml
clibackup.xml | 97 kB | 97.2 kB/s | ETA: 00:00:00 | 100%

c. Save the output of the SVC CLI commands in .txt format as shown in Example C-3.
Example: C-3 List of SVC commands to be issued and saved

lssystem
lsnode
lsnode <node name>
lsnodevpd <node name>
lsiogrp
lsiogrp <iogrp name>
lscontroller
lscontroller <controller name>
lsmdiskgrp
lsmdiskgrp <mdiskgrp name>
lsmdisk
lsquorum
lsquorum <quorum id>
lsvdisk
lshost
lshost <host name>
lshostvdiskmap
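A minimal sketch of how this collection could be automated from a management workstation; the cluster alias svc_cluster and key-based ssh access are assumptions, and the per-object variants from Example C-3 are left out for brevity:

#!/bin/bash
# Collect the cluster-wide command output from Example C-3 into one dated file.
OUT="svc_config_$(date +%Y%m%d).txt"
for cmd in lssystem lsnode lsiogrp lscontroller lsmdiskgrp lsmdisk lsquorum \
           lsvdisk lshost lshostvdiskmap; do
    echo "=== $cmd ===" >> "$OUT"
    ssh admin@svc_cluster "$cmd" >> "$OUT"
done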

From the output of the above commands and the .xml file, we have a complete picture of the SVC Split I/O Group infrastructure, and we know the WWNNs of the SVC FC ports so that we can reuse them during the recovery operations described later in this appendix. Example C-4 shows the part of the .xml file that we need to recreate a Split I/O Group environment after a critical event.
Example: C-4 xml configuration file

<object type="node" >
<property name="id" value="1" />
<property name="name" value="node_159072" />
<property name="UPS_serial_number" value="100014P293" />
<property name="WWNN" value="500507680100C109" />
<property name="status" value="online" />
<property name="IO_Group_id" value="0" />
<property name="IO_Group_name" value="io_grp0" />
<property name="partner_node_id" value="2" />
<property name="partner_node_name" value="node_159680" />
<property name="config_node" value="yes" />
<property name="UPS_unique_id" value="2040000044802243" />
<property name="port_id" value="500507680140C109" />
<property name="port_status" value="active" />
<property name="port_speed" value="8Gb" />
<property name="port_id" value="500507680130C109" />
<property name="port_status" value="active" />
<property name="port_speed" value="8Gb" />
<property name="port_id" value="500507680110C109" />
<property name="port_status" value="active" />
<property name="port_speed" value="8Gb" />
<property name="port_id" value="500507680120C109" />
<property name="port_status" value="active" />
<property name="port_speed" value="8Gb" />
<property name="hardware" value="CG8" />
<property name="iscsi_name" value="iqn.1986-03.com.ibm:2145.splitcluster1.node159072" />
<property name="iscsi_alias" value="" />
<property name="failover_active" value="no" />
<property name="failover_name" value="node_159680" />
<property name="failover_iscsi_name" value="iqn.1986-03.com.ibm:2145.splitcluster1.node159680" />
<property name="failover_iscsi_alias" value="" />
<property name="panel_name" value="159072" />
<property name="enclosure_id" value="" />
<property name="canister_id" value="" />
<property name="enclosure_serial_number" value="" />
<property name="service_IP_address" value="9.155.114.14" />
<property name="service_gateway" value="9.155.112.1" />
<property name="service_subnet_mask" value="255.255.240.0" />
<property name="service_IP_address_6" value="" />
<property name="service_gateway_6" value="" />
<property name="service_prefix_6" value="" />

The same information is also available from the saved .txt command output, as shown in Example C-5.


Example: C-5 lsnode example output

IBM_2145:Split_Cluster_1:admin>lsnode 1
id 1
name node_159072
UPS_serial_number 100014P293
WWNN 500507680100C109
status online
IO_Group_id 0
IO_Group_name io_grp0
partner_node_id 2
partner_node_name node_159680
config_node yes
UPS_unique_id 2040000044802243
port_id 500507680140C109
port_status active
port_speed 8Gb
port_id 500507680130C109
port_status active
port_speed 8Gb
port_id 500507680110C109
port_status active
port_speed 8Gb
port_id 500507680120C109
port_status active
port_speed 8Gb
hardware CG8
iscsi_name iqn.1986-03.com.ibm:2145.splitcluster1.node159072
iscsi_alias
failover_active no
failover_name node_159680
failover_iscsi_name iqn.1986-03.com.ibm:2145.splitcluster1.node159680
failover_iscsi_alias
panel_name 159072
enclosure_id
canister_id
enclosure_serial_number
service_IP_address 9.155.114.14
service_gateway 9.155.112.1
service_subnet_mask 255.255.240.0
service_IP_address_6
service_gateway_6
service_prefix_6

Note: For further detailed information about how to back up your configuration, consult:
https://www-304.ibm.com/support/docview.wss?uid=ssg1S1002175
http://publib.boulder.ibm.com/infocenter/svc/ic/index.jsp
http://www.redbooks.ibm.com/abstracts/sg247933.html?Open

2. It is strongly recommended to keep an up-to-date copy of the diagram of your environment in which all connections are described, at a high level or in detail.
3. It is recommended to have a standard labeling schema and naming convention for your FC and Ethernet cabling, and to have it fully documented.
4. Back up your SAN zoning. The zoning backup can be done using your FC switch or director command-line interface or GUI. The essential zoning configuration data (domain ID, zoning, aliases, configuration, or zone set) can be saved in a .txt file using the output from the Cisco or Brocade CLI commands, or the entire configuration can be backed up using the appropriate vendor utility. Example C-6 shows what can be saved in a .txt file using Brocade CLI commands.
Example: C-6 Zoning example

IBM-2498-b40-10:FID128:admin>switchshow
switchName: IBM-2498-b40-10
switchType: 66.1
switchState: Online
switchMode: Native
switchRole: Subordinate
switchDomain: 10
switchId: fffc0a
switchWwn: 10:00:00:05:33:39:7d:78
zoning: ON (SVC_WDM_test)
switchBeacon: OFF
FC Router: OFF
Allow XISL Use: OFF
LS Attributes: [FID: 128, Base Switch: No, Default Switch: Yes, Address Mode 0]
Index Port Address Media Speed State Proto
==============================================
0   0  0a0000 id N4 Online FC F-Port 50:05:07:63:03:30:45:c7
1   1  0a0100 id N4 Online FC F-Port 50:05:07:63:03:38:45:c7
8   8  0a0800 id N8 Online FC F-Port 50:05:07:68:01:10:c4:3f
9   9  0a0900 id N8 No_Light FC
. lines omitted for brevity .
23  23 0a1700 id N2 Online FC LE E-Port 10:00:00:05:1e:34:4b:66 "IBM_2005_H16_4" (upstream)(Trunk master)
. lines omitted for brevity .
36  36 0a2400 id N8 No_Light FC

IBM-2498-b40-10:FID128:admin>fabricshow
Switch ID   Worldwide Name          Enet IP Addr  FC IP Addr  Name
-------------------------------------------------------------------------
 2: fffc02  10:00:00:05:1e:34:4b:66 9.155.66.212  0.0.0.0     >"IBM_2005_H16_4"
10: fffc0a  10:00:00:05:33:39:7d:78 9.155.114.11  0.0.0.0     "IBM-2498-b40-10"
32: fffc20  10:00:00:05:33:39:36:49 9.155.114.12  0.0.0.0     "IBM_2498-b40-11"

IBM-2498-b40-10:FID128:admin>cfgshow
Defined configuration:
cfg: SVC_WDM_test
     ESX_3650_03_DS8K; ESX_3650_03_SVC; MS_3650_05_DS8K; MS_3650_05_SVC;
     SLES_3650_10; SLES_3650_11; SVC_DR_CL_1; SVC_DR_CL_1_DS8K_S3;
     SVC_Split_CL_1; SVC_Split_CL_1_DS34_03_CTL_A; SVC_Split_CL_1_DS34_03_CTL_B;
     SVC_Split_CL_1_DS34_09_CTL_A; SVC_Split_CL_1_DS34_09_CTL_B;
     SVC_Split_CL_1_DS47_Q_CTL_A; SVC_Split_CL_1_DS47_Q_CTL_B;
     SVC_Split_CL_1_DS50_CTL_A; SVC_Split_CL_1_DS50_CTL_B;
     SVC_Split_CL_1_DS8K_S1; SVC_Split_CL_1_DS8K_S2
. lines omitted for brevity .
zone: SVC_Split_CL_1
     SVC_85_P2; SVC_85_P3; SVC_87_P2; SVC_87_P3
zone: SVC_Split_CL_1_DS34_03_CTL_A
     DS3400_03_CTL_A_A; SVC_85_P2; SVC_85_P3; SVC_87_P2; SVC_87_P3
zone: SVC_Split_CL_1_DS34_03_CTL_B
     DS3400_03_CTL_B_B; SVC_85_P2; SVC_85_P3; SVC_87_P2; SVC_87_P3
zone: SVC_Split_CL_1_DS34_09_CTL_A
     DS3400_09_CTL_A_2; SVC_85_P2; SVC_85_P3; SVC_87_P2; SVC_87_P3
zone: SVC_Split_CL_1_DS34_09_CTL_B
     DS3400_09_CTL_B_2; SVC_85_P2; SVC_85_P3; SVC_87_P2; SVC_87_P3
. lines omitted for brevity .
Effective configuration:
cfg: SVC_WDM_test
. lines omitted for brevity .
zone: SVC_Split_CL_1_DS50_CTL_B
     20:15:00:80:e5:18:29:d0
     20:25:00:80:e5:18:29:d0
     50:05:07:68:01:20:c1:09
     50:05:07:68:01:30:c1:09
     50:05:07:68:01:20:c4:78
     50:05:07:68:01:30:c4:78
zone: SVC_Split_CL_1_DS8K_S1
     50:05:07:63:03:23:45:c7
     50:05:07:63:03:3b:45:c7
     50:05:07:68:01:20:c1:09
     50:05:07:68:01:30:c1:09
     50:05:07:68:01:20:c4:78
     50:05:07:68:01:30:c4:78
zone: SVC_Split_CL_1_DS8K_S2
     50:05:07:63:03:30:45:c7
     50:05:07:63:03:38:45:c7
     50:05:07:68:01:20:c1:09
     50:05:07:68:01:30:c1:09
     50:05:07:68:01:20:c4:78
     50:05:07:68:01:30:c4:78

As a best practice, we suggest using WWNN zoning during the implementation and, during the recovery phase after a critical event, reusing for as long as possible the same domain IDs and port numbers that were used at the failing site. Zoning is propagated to each switch or director because of the SAN extension with ISLs. More detail on this is provided later. For further detailed information about how to back up your FC switch or director zoning configuration, consult the Brocade Fabric OS Administrator's Guide or the Cisco MDS 9000 Family Command Reference for the appropriate firmware level.

5. Back up your back-end storage subsystem configuration. In your Split I/O Group implementation you may use storage subsystems from vendors other than IBM. Those storage subsystems should be configured following the SVC best practices in order to be used for Volume Mirroring. We suggest backing up your storage subsystem configuration so that you are in a position to recreate the same environment after a critical event, when you will be called upon to re-establish your Split I/O Group infrastructure in a different site with new storage subsystems. More details on this are provided in later topics.
a. For DS3XXX, DS4XXX, or DS5XXX storage subsystems, save in a safe place a copy of an up-to-date subsystem profile, as shown in Figure C-7.


Figure C-7 DSXYYY back up configuration example

b. For the DS8000 storage subsystem, we suggest saving in .txt format the output of the DS CLI commands shown in Example C-7.
Example: C-7 DS8000 commands

lsarraysite -l
lsarray -l
lsrank -l
lsextpool -l
lsfbvol -l
lshostconnect -l
lsvolgrp -l
showvolgrp -lunmap <SVC vg_name>

c. For the XIV storage subsystem, we suggest saving in .txt format the output of the XCLI commands shown in Example C-8; consult your XIV specialist for further suggestions.
Example: C-8 XIV commands

host_list
host_list_ports
mapping_list
vol_mapping_list
pool_list
vol_list

d. For any other supported storage vendor, refer to their documentation in order to save a configuration or report where it will be easy to find the SVC MDisk configuration and mapping.


Diagnosis guidelines
In this section we provide guidelines on how to diagnose a critical event in one of the two sites where the Split I/O Group has been implemented. With these guidelines you will be in a position to understand the extent of any damage, what is still running, and what can be recovered, and with what impact on performance, applications, and service level agreements.

Diagnosis Guidelines for NO ISL configuration


Here we consider a Split I/O Group configuration with No ISL and passive WDM, as shown in Figure C-8.

Figure C-8 Connection with No ISL and passive WDM

We assume that the configuration has been implemented as a campus solution where the distance between Site 1 and Site 2 is less than 10 km.


Up and running scenario analysis


We start with a scenario where everything is up and running and all the best practices have been applied, as shown in the SVC CLI command output in Example C-9.
Example: C-9 Running example IBM_2145:Split_Cluster_1:admin>lssystem id 0000020061618212 name Split_Cluster_1 location local partnership bandwidth total_mdisk_capacity 6.0TB space_in_mdisk_grps 6.0TB space_allocated_to_vdisks 4.11TB total_free_space 1.8TB total_vdiskcopy_capacity 4.66TB total_used_capacity 4.09TB total_overallocation 78 total_vdisk_capacity 2.33TB total_allocated_extent_capacity 4.11TB statistics_status on statistics_frequency 5 cluster_locale en_US time_zone 384 Europe/Paris code_level 6.3.0.0 (build 54.3.1109280000) console_IP 9.155.114.22:443 id_alias 0000020061618212 gm_link_tolerance 300 gm_inter_cluster_delay_simulation 0 gm_intra_cluster_delay_simulation 0 gm_max_host_delay 5 email_reply email_contact email_contact_primary email_contact_alternate email_contact_location email_contact2 email_contact2_primary email_contact2_alternate email_state stopped inventory_mail_interval 0 cluster_ntp_IP_address cluster_isns_IP_address iscsi_auth_method none iscsi_chap_secret auth_service_configured no auth_service_enabled no auth_service_url auth_service_user_name auth_service_pwd_set no auth_service_cert_set no auth_service_type tip relationship_bandwidth_limit 25 tier generic_ssd tier_capacity 0.00MB tier_free_capacity 0.00MB tier generic_hdd tier_capacity 5.96TB

Appendix C. SAN Volume Controller Split I/O Group Overview, Diagnostics, and Recovery Guidelines

917

7933 13 Split IO Group MASSIMO.fm

Draft Document for Review January 17, 2012 6:10 am

tier_free_capacity 1.85TB has_nas_key no layer replication rc_buffer_size 48 IBM_2145:Split_Cluster_1:admin> IBM_2145:Split_Cluster_1:admin>lsnode id name UPS_serial_number WWNN status IO_Group_id IO_Group_name config_node UPS_unique_id hardware iscsi_name iscsi_alias panel_name enclosure_id canister_id enclosure_serial_number 1 node_159072 100014P293 500507680100C109 online 0 io_grp0 yes 2040000044802243 CG8 iqn.1986-03.com.ibm:2145.splitcluster1.node159072 159072 2 node_159680 100013I066 500507680100C478 online 0 io_grp0 no 2040000043640186 CG8 iqn.1986-03.com.ibm:2145.splitcluster1.node159680 159680 IBM_2145:Split_Cluster_1:admin>lsiogrp id name node_count vdisk_count 0 io_grp0 2 34 1 io_grp1 0 0 2 io_grp2 0 0 3 io_grp3 0 0 4 recovery_io_grp 0 0

host_count 4 4 4 4 0

IBM_2145:Split_Cluster_1:admin>lscontroller id controller_name ctrl_s/n vendor_id product_id_high 0 DS3400_03 IBM 1 DS3400_09 IBM 2 DS4700 IBM 3 DS8000 75AAFC1FFFF IBM 4 DS5020 IBM

product_id_low 1726-4xx 1726-4xx 1814 2107900 1814 FAStT FAStT FAStT FAStT

IBM_2145:Split_Cluster_1:admin>lsmdiskgrp id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning easy_tier easy_tier_status 0 S3_DS4700_Q online 1 0 99.50GB 256 99.50GB 0.00MB 0.00MB 0.00MB 0 80 auto inactive 1 DS3400_03_11 online 1 4 299.50GB 256 3.50GB 296.00GB 296.00GB 296.00GB 98 80 auto inactive . lines omitted for brevity . 15 DS5020_12 online 1 4 600.00GB 256 304.00GB 296.00GB 296.00GB 296.00GB 49 80 auto inactive 16 DS5020_22 online 1 4 600.00GB 256 304.00GB 296.00GB 296.00GB 296.00GB 49 80 auto inactive IBM_2145:Split_Cluster_1:admin>lsmdisk id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID tier 0 DS3400_09_11 online managed 5 DS3400_09_11 300.0GB 0000000000000000 DS3400_09 600a0b8000369a8100000ac44e4dd8cc00000000000000000000000000000000 generic_hdd 1 DS3400_09_12 online managed 6 DS3400_09_12 300.0GB 0000000000000001 DS3400_09 600a0b80003743e800000e184e4ddbbb00000000000000000000000000000000 generic_hdd . lines omitted for brevity


.
15 mdisk2 online managed 15 DS5020_12 600.0GB 0000000000000002 DS5020 60080e5000182ec60000b0814e560c1e00000000000000000000000000000000 generic_hdd
16 mdisk3 online managed 16 DS5020_22 600.0GB 0000000000000003 DS5020 60080e5000182ec60000b0864e560c5800000000000000000000000000000000 generic_hdd

IBM_2145:Split_Cluster_1:admin>lsvdisk
id name IO_Group_id IO_Group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state se_copy_count RC_change
0 SLES_3650_10_01 0 io_grp0 online many many 100.00GB many 60050768018586084800000000000000 0 2 empty 0 no
1 SLES_3650_11_01 0 io_grp0 online many many 100.00GB many 60050768018586084800000000000001 0 2 empty 0 no
.
lines omitted for brevity
.
32 test2 0 io_grp0 online many many 10.00GB many 60050768018586084800000000000020 0 2 empty 2 no
33 test3 0 io_grp0 online many many 10.00GB many 60050768018586084800000000000021 0 2 empty 2 no
IBM_2145:Split_Cluster_1:admin>lsquorum
quorum_index status id name          controller_id controller_name active object_type override
0            online 8  DS4700_SVC_Q1 2             DS4700          yes    mdisk       yes
1            online 0  DS3400_09_11  1             DS3400_09       no     mdisk       yes
2            online 4  DS3400_03_11  0             DS3400_03       no     mdisk       yes

From the SVC CLI command output shown in Example C-9 on page 917, you can see that:
- The SVC clustered system is accessible through the CLI.
- The SVC nodes are online, and one of them is the config node.
- The I/O Groups are in the correct state.
- The storage subsystem controllers are connected.
- The Managed Disk Groups are online.
- The MDisks are online.
- The volumes are online.
- The three quorum disks are in the correct state.
Now we can check the Volume Mirroring status by running a single SVC CLI command against each volume, as shown in Example C-10.
Example: C-10 Volume mirroring status
IBM_2145:Split_Cluster_1:admin>lsvdisk SLES_3650_10_01
id 0
name SLES_3650_10_01
IO_Group_id 0
IO_Group_name io_grp0
status online
mdisk_grp_id many
mdisk_grp_name many
capacity 100.00GB


type many
formatted no
mdisk_id many
mdisk_name many
.
lines omitted for brevity
.
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name DS3400_03_11
.
lines omitted for brevity
.
copy_id 1
status online
sync yes
primary no
mdisk_grp_id 13
mdisk_grp_name DS5020_11
.
lines omitted for brevity
.
tier_capacity 100.00GB

From the SVC CLI command output in Example C-10 on page 919, you can see that:
- The volume is online.
- The storage pool name and the MDisk name are many, which means that Volume Mirroring is in place.
- Copy id 0 is online, in sync, and is the primary.
- Copy id 1 is online, in sync, and is the secondary.
If you have several volumes to check, you could create a customized script that runs this command for each volume, as sketched below.
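The following is a minimal sketch of such a script. It assumes a management host with SSH key access to the SVC cluster; the cluster alias svccluster and the selection of output lines are our assumptions, not fixed names.

#!/bin/sh
# List every volume that has two copies, then print its per-copy status.
for vol in $(ssh admin@svccluster "lsvdisk -nohdr -delim : -filtervalue copy_count=2" | cut -d: -f2)
do
    echo "=== $vol ==="
    # Keep only the volume status and per-copy status lines of the detailed view
    ssh admin@svccluster "lsvdisk $vol" | grep -E "^(name|status|copy_id|sync|primary|mdisk_grp_name)"
done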

Critical event scenario analysis


We will look at our SVC environment following a critical event that has caused the complete loss of one of the sites, in our case Site 1. The following steps should be followed as a guideline or checklist in order to gain a complete view of any damage, and to gather enough decision elements to determine what your next recovery actions will be.
1. Is SVC system management available through the GUI or CLI?
   a. Is SVC system login possible?
      YES: The SVC system is online; continue with step 2.
      NO: The SVC system may be offline or suffering connection problems.
         i. Check your connections, cabling, and node front panel event messages.
         ii. Verify the SVC system status using the Service Assistant menu or the node front panel. For detailed information, refer to IBM System Storage SAN Volume Controller Troubleshooting Guide, GC27-2284.
         iii. Bring a part of the SVC system online for further diagnostics.


Figure C-9 on page 921 and Figure C-10 on page 921 show an example of the Service Assistant menu in an SVC system with one failing node.

Figure C-9 Service Assistant Menu login

iv. Using a browser, connect to the service IP address of one of your SVC nodes:
https://<service_ip_address>/service/

Log in with your SVC system GUI password. After login you are redirected to the Service Assistant menu, as shown in Figure C-10.

Figure C-10 Service Assistant Menu


From the Service Assistant menu you have the chance to bring at least a part of the SVC clustered system online for further diagnostics. For further and detailed information about the Service Assistant menu, refer to Implementing the IBM System Storage SAN Volume Controller V6.3, SG24-7933.
2. If the SVC system management is available:
   a. Check the status using the SVC CLI by running the commands shown in Example C-11.
Example: C-11 lssystem example
IBM_2145:Split_Cluster_1:admin>lssystem
id 0000020061618212
name Split_Cluster_1
location local
partnership
bandwidth
total_mdisk_capacity 6.0TB
space_in_mdisk_grps 6.0TB
space_allocated_to_vdisks 4.63TB
total_free_space 1.3TB
.
lines omitted for brevity
.
layer replication
rc_buffer_size 48

As you can see, the SVC clustered system is still accessible from the CLI, and also from the GUI, as shown in Figure C-11.

Figure C-11 GUI example

b. Check the status of the nodes as shown in Example C-12 on page 923.


Example: C-12 Node status example
IBM_2145:Split_Cluster_1:admin>lsnode
id name UPS_serial_number WWNN status IO_Group_id IO_Group_name config_node UPS_unique_id hardware iscsi_name iscsi_alias panel_name enclosure_id canister_id enclosure_serial_number
1 node_159072 100014P293 500507680100C109 offline 0 io_grp0 no 2040000044802243 CG8 iqn.1986-03.com.ibm:2145.splitcluster1.node159072 159072
2 node_159680 100013I066 500507680100C478 online 0 io_grp0 yes 2040000043640186 CG8 iqn.1986-03.com.ibm:2145.splitcluster1.node159680 159680
IBM_2145:Split_Cluster_1:admin>lsnode node_159072
id 1
name node_159072
UPS_serial_number 100014P293
WWNN 500507680100C109
status offline
IO_Group_id 0
IO_Group_name io_grp0
partner_node_id 2
partner_node_name node_159680
config_node no
UPS_unique_id 2040000044802243
port_id 500507680140C109
port_status inactive
port_speed 8Gb
port_id 500507680130C109
port_status inactive
port_speed 8Gb
port_id 500507680110C109
port_status inactive
port_speed 8Gb
port_id 500507680120C109
port_status inactive
port_speed 8Gb
.
lines omitted for brevity
.
service_prefix_6
IBM_2145:Split_Cluster_1:admin>lsnode node_159680
id 2
name node_159680
UPS_serial_number 100013I066
WWNN 500507680100C478
status online
IO_Group_id 0
IO_Group_name io_grp0
partner_node_id 1
partner_node_name node_159072
config_node yes
UPS_unique_id 2040000043640186
port_id 500507680140C478
port_status active
port_speed 8Gb
port_id 500507680130C478
port_status active
port_speed 8Gb
port_id 500507680110C478


port_status active
port_speed 8Gb
port_id 500507680120C478
port_status active
port_speed 8Gb
.
lines omitted for brevity
.
service_prefix_6

As you can see from Example C-12 on page 923:
- The config node role has moved from node 1 to node 2.
- Node 1 is offline.
- The FC ports in node 1 are inactive.
- Node 2 is online.
- The FC ports in node 2 are still active.
This means that in this event we have lost 50% of the SVC clustered system resources, but the system is still up and running. Using the GUI, we can see the same information, as shown in Figure C-12.

Figure C-12 Node status with GUI

The events panel in the GUI also provides detailed evidence of the many events related to the power loss, as shown in Figure C-13 on page 925.


Figure C-13 Event example

c. Check the I/O group status as shown in Example C-13.


Example: C-13 I/O group status
IBM_2145:Split_Cluster_1:admin>lsiogrp
id name            node_count vdisk_count host_count
0  io_grp0         2          34          4
1  io_grp1         0          0           4
2  io_grp2         0          0           4
3  io_grp3         0          0           4
4  recovery_io_grp 0          0           0
IBM_2145:Split_Cluster_1:admin>
IBM_2145:Split_Cluster_1:admin>lsiogrp 0
id 0
name io_grp0
node_count 2
vdisk_count 34
host_count 4
flash_copy_total_memory 20.0MB
flash_copy_free_memory 20.0MB
remote_copy_total_memory 20.0MB
remote_copy_free_memory 20.0MB
mirroring_total_memory 20.0MB
mirroring_free_memory 18.7MB
raid_total_memory 40.0MB
raid_free_memory 40.0MB
maintenance no

As you can see, the I/O group still reports two nodes per I/O group.
d. Check the quorum status as shown in Example C-14.
Example: C-14 Quorum status
IBM_2145:Split_Cluster_1:admin>lsquorum
quorum_index status id name          controller_id controller_name active object_type override
0            online 8  DS4700_SVC_Q1 2             DS4700          yes    mdisk       yes
1            online 5  DS3400_03_12  0             DS3400_03       no     mdisk       ignored
2            online 4  DS3400_03_11  0             DS3400_03       no     mdisk       yes

As you can see, the active quorum disk is still active because it was not impacted by the critical event. However, quorum index 1, whose original resource was in the site that suffered the power failure, is flagged with override ignored: when the original resource (located on DS3400_09) goes offline and another resource is used instead, the override field in lsquorum shows ignored. From the GUI it is not as easy to find the status of the quorum disks; you would have to go through all the MDisks and check in detail which quorum disks are defined and which one is active, as shown in Figure C-14.

Figure C-14 Quorum status with GUI
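Once the failed site is back, you may want to place a quorum index on a specific MDisk again. The following is a minimal sketch using the chquorum command; the MDisk name is taken from our example configuration, and whether you need to reassign anything at all depends on how the system has already rearranged the quorum disks:

IBM_2145:Split_Cluster_1:admin>chquorum -mdisk DS3400_09_11 1

The -active flag of the same command selects which of the three quorum disks is the active one.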

e. Check the controller status as shown in Example C-15.


Example: C-15 Controller status
IBM_2145:Split_Cluster_1:admin>lscontroller
id controller_name ctrl_s/n    vendor_id product_id_low product_id_high
0  DS3400_03                   IBM       1726-4xx       FAStT
1  DS3400_09                   IBM       1726-4xx       FAStT
2  DS4700                      IBM       1814           FAStT
3  DS8000          75AAFC1FFFF IBM       2107900
4  DS5020                      IBM       1814           FAStT
IBM_2145:Split_Cluster_1:admin>lscontroller DS3400_03
id 0
controller_name DS3400_03
WWNN 200A00A0B836972A


mdisk_link_count 8
max_mdisk_link_count 8
degraded no
vendor_id IBM
product_id_low 1726-4xx
product_id_high FAStT
product_revision 0617
ctrl_s/n
allow_quorum yes
WWPN 203B00A0B836972A
path_count 4
max_path_count 16
WWPN 203A00A0B836972A
path_count 2
max_path_count 8
WWPN 202A00A0B836972A
path_count 2
max_path_count 8
WWPN 202B00A0B836972A
path_count 0
max_path_count 8
IBM_2145:Split_Cluster_1:admin>lscontroller DS3400_09
id 1
controller_name DS3400_09
WWNN 200300A0B8369ACA
mdisk_link_count 4
max_mdisk_link_count 4
degraded yes
vendor_id IBM
product_id_low 1726-4xx
product_id_high FAStT
product_revision 0617
ctrl_s/n
allow_quorum yes
WWPN 202300A0B8369ACA
path_count 0
max_path_count 4
WWPN 203300A0B8369ACA
path_count 0
max_path_count 4
WWPN 202400A0B8369ACA
path_count 0
max_path_count 8
WWPN 203400A0B8369ACA
path_count 0
max_path_count 4
IBM_2145:Split_Cluster_1:admin>lscontroller DS4700
id 2
controller_name DS4700
WWNN 200400A0B82AB012
mdisk_link_count 1
max_mdisk_link_count 1
degraded yes
vendor_id IBM
product_id_low 1814
product_id_high FAStT
product_revision 0916
ctrl_s/n


allow_quorum yes
WWPN 200400A0B82AB013
path_count 0
max_path_count 2
WWPN 200500A0B82AB014
path_count 0
max_path_count 0
WWPN 200400A0B82AB014
path_count 1
max_path_count 2
WWPN 200500A0B82AB013
path_count 0
max_path_count 0
IBM_2145:Split_Cluster_1:admin>lscontroller DS8000
id 3
controller_name DS8000
WWNN 5005076303FFC5C7
mdisk_link_count 0
max_mdisk_link_count 0
degraded yes
vendor_id IBM
product_id_low 2107900
product_id_high
product_revision 3.44
ctrl_s/n 75AAFC1FFFF
allow_quorum yes
WWPN 50050763033045C7
path_count 0
max_path_count 0
WWPN 50050763032345C7
path_count 0
max_path_count 0
WWPN 50050763033B45C7
path_count 0
max_path_count 0
WWPN 50050763033845C7
path_count 0
max_path_count 0
IBM_2145:Split_Cluster_1:admin>lscontroller DS5020
id 4
controller_name DS5020
WWNN 20040080E51829D0
mdisk_link_count 4
max_mdisk_link_count 4
degraded yes
vendor_id IBM
product_id_low 1814
product_id_high FAStT
product_revision 1060
ctrl_s/n
allow_quorum yes
WWPN 20340080E51829D0
path_count 0
max_path_count 0
WWPN 20440080E51829D0
path_count 0
max_path_count 4
WWPN 20350080E51829D0


path_count 0
max_path_count 4
WWPN 20450080E51829D0
path_count 0
max_path_count 0
WWPN 20140080E51829D0
path_count 0
max_path_count 2
WWPN 20240080E51829D0
path_count 0
max_path_count 2
WWPN 20150080E51829D0
path_count 0
max_path_count 2
WWPN 20250080E51829D0
path_count 0
max_path_count 2

As you can see in the output, some controllers are still accessible from the SVC system and others are no longer accessible, because the power loss in site 1 has impacted the SVC node, the storage subsystem, and the FC SAN switches. The same information can be obtained from the GUI, as shown in Figure C-15.

Figure C-15 Online controllers

The offline controller is shown in Figure C-16 on page 930.


Figure C-16 Offline controller

f. Check the storage pool status as shown in Example C-16 on page 930.
Example: C-16 Storage pool status
IBM_2145:Split_Cluster_1:admin>lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning easy_tier easy_tier_status
0 S3_DS4700_Q online 1 0 99.50GB 256 99.50GB 0.00MB 0.00MB 0.00MB 0 80 auto inactive
1 DS3400_03_11 online 1 4 299.50GB 256 3.50GB 296.00GB 296.00GB 296.00GB 98 80 auto inactive
2 DS3400_03_12 online 1 6 299.50GB 256 3.00GB 316.00GB 296.00GB 296.43GB 105 80 auto inactive
3 DS3400_03_21 online 1 4 300.00GB 256 4.00GB 296.00GB 296.00GB 296.00GB 98 80 auto inactive
4 DS3400_03_22 online 1 4 300.00GB 256 4.00GB 296.00GB 296.00GB 296.00GB 98 80 auto inactive
5 DS3400_09_11 offline 1 4 300.00GB 256 4.00GB 296.00GB 296.00GB 296.00GB 98 80 auto inactive
6 DS3400_09_12 offline 1 4 300.00GB 256 4.00GB 296.00GB 296.00GB 296.00GB 98 80 auto inactive
7 DS3400_09_21 offline 1 4 300.00GB 256 4.00GB 296.00GB 296.00GB 296.00GB 98 80 auto inactive
8 DS3400_09_22 offline 1 4 300.00GB 256 4.00GB 296.00GB 296.00GB 296.00GB 98 80 auto inactive
9 DS3400_03_13 online 1 4 300.00GB 256 4.00GB 296.00GB 296.00GB 296.00GB 98 80 auto inactive
10 DS3400_03_14 online 1 4 300.00GB 256 4.00GB 296.00GB 296.00GB 296.00GB 98 80 auto inactive
11 DS3400_03_23 online 1 4 300.00GB 256 4.00GB 296.00GB 296.00GB 296.00GB 98 80 auto inactive
12 DS3400_03_24 online 1 4 300.00GB 256 4.00GB 296.00GB 296.00GB 296.00GB 98 80 auto inactive
13 DS5020_11 offline 1 6 600.00GB 256 303.50GB 316.00GB 296.00GB 296.43GB 52 80 auto inactive
14 DS5020_21 offline 1 4 600.00GB 256 304.00GB 296.00GB 296.00GB 296.00GB 49 80 auto inactive
15 DS5020_12 offline 1 4 600.00GB 256 304.00GB 296.00GB 296.00GB 296.00GB 49 80 auto inactive
16 DS5020_22 offline 1 4 600.00GB 256 304.00GB 296.00GB 296.00GB 296.00GB 49 80 auto inactive


As you can see from the output, as a result of the critical event some storage pools are offline while others are still online. The offline storage pools are the ones that had space allocated on the storage subsystem that suffered the loss of power. To list only the affected pools, the filtervalue option can be used, as in the short sketch below.
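This one-line check is an illustrative sketch; it simply filters the same lsmdiskgrp output that Example C-16 shows in full:

IBM_2145:Split_Cluster_1:admin>lsmdiskgrp -nohdr -filtervalue status=offline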
g. Check the MDisk status as shown in Example C-17.

Example: C-17 MDisk status
IBM_2145:Split_Cluster_1:admin>lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID tier
0 DS3400_09_11 offline managed 5 DS3400_09_11 300.0GB 0000000000000000 DS3400_09 600a0b8000369a8100000ac44e4dd8cc00000000000000000000000000000000 generic_hdd
1 DS3400_09_12 offline managed 6 DS3400_09_12 300.0GB 0000000000000001 DS3400_09 600a0b80003743e800000e184e4ddbbb00000000000000000000000000000000 generic_hdd
.
lines omitted for brevity
.
10 DS3400_03_23 online managed 11 DS3400_03_23 300.0GB 0000000000000005 DS3400_03 600a0b800036972a000017644e55b23200000000000000000000000000000000 generic_hdd
11 DS3400_03_14 online managed 10 DS3400_03_14 300.0GB 0000000000000006 DS3400_03 600a0b800036a5c0000015224e55b1c100000000000000000000000000000000 generic_hdd
12 DS3400_03_24 online managed 12 DS3400_03_24 300.0GB 0000000000000007 DS3400_03 600a0b800036a5c0000015244e55b20e00000000000000000000000000000000 generic_hdd
13 mdisk0 offline managed 13 DS5020_11 600.0GB 0000000000000000 DS5020 60080e5000182ec60000b07e4e560bfc00000000000000000000000000000000 generic_hdd
14 mdisk1 offline managed 14 DS5020_21 600.0GB 0000000000000001 DS5020 60080e5000182ec60000b0834e560c3b00000000000000000000000000000000 generic_hdd
15 mdisk2 offline managed 15 DS5020_12 600.0GB 0000000000000002 DS5020 60080e5000182ec60000b0814e560c1e00000000000000000000000000000000 generic_hdd
16 mdisk3 offline managed 16 DS5020_22 600.0GB 0000000000000003 DS5020 60080e5000182ec60000b0864e560c5800000000000000000000000000000000 generic_hdd

We can get the same information from the GUI as shown in Figure C-17 on page 932.


Figure C-17 MDisk status with GUI

h. Check the Volume status as shown in Example C-18.


Example: C-18 Volume status
IBM_2145:Split_Cluster_1:admin>lsvdisk
id name IO_Group_id IO_Group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state se_copy_count RC_change
0 SLES_3650_10_01 0 io_grp0 degraded many many 100.00GB many 60050768018586084800000000000000 0 2 empty 0 no
1 SLES_3650_11_01 0 io_grp0 degraded many many 100.00GB many 60050768018586084800000000000001 0 2 empty 0 no
.
lines omitted for brevity
.
32 test2 0 io_grp0 degraded many many 10.00GB many 60050768018586084800000000000020 0 2 empty 2 no
33 test3 0 io_grp0 degraded many many 10.00GB many 60050768018586084800000000000021 0 2 empty 2 no

As you can see from the output in Example C-18, although we have lost 50% of the resources due to the loss of power in site 1, the volumes are not offline, but in a degraded state. This is because Volume Mirroring acted to guarantee business continuity, and the volumes are still accessible from the hosts that are still running in site 2, where power is still present. In this case it might be helpful to use the filtervalue option of the SVC CLI command in order to reduce the number of lines produced and the number of volumes to check, as shown in Example C-19.
Example: C-19 Volume status
IBM_2145:Split_Cluster_1:admin>lsvdisk -nohdr -filtervalue copy_count=2
0 SLES_3650_10_01 0 io_grp0 degraded many many 100.00GB many 60050768018586084800000000000000 0 2 empty 0 no
1 SLES_3650_11_01 0 io_grp0 degraded many many 100.00GB many 60050768018586084800000000000001 0 2 empty 0 no
31 MS_3650_05_08 0 io_grp0 degraded many many 48.00GB many 6005076801858608480000000000001F 0 2 empty 0 no
.
lines omitted for brevity
.
32 test2 0 io_grp0 degraded many many 10.00GB many 60050768018586084800000000000020 0 2 empty 2 no
33 test3 0 io_grp0 degraded many many 10.00GB many 60050768018586084800000000000021 0 2 empty 2 no

As you can see from the output in Example C-19, one copy of each volume is offline, and you can also see which storage pool each offline copy is related to. We can get the same information from the GUI, as shown in Figure C-18.

Figure C-18 Volume status with GUI


As you can see from Figure C-18, it is fairly easy to understand which resources are online, which are not, and why each volume has a degraded status.
3. Check the path status. Check the status of the storage paths from your hosts' point of view using your multipathing software commands. For SVC, it is recommended to use SDD (Subsystem Device Driver) as the multipathing software. For further and detailed information about SDD commands, refer to:
http://www-01.ibm.com/support/docview.wss?rs=540&context=ST52G7&uid=ssg1S7000303
or refer to the Multipath Subsystem Device Driver User's Guide, GC52-1309-03. You can also verify the SDD vpath device configuration by entering the lsvpcfg or datapath query device command (see the command sketch at the end of this section).
All the above steps are also valid for a limited failure, where the impact is confined to part of one of the sites. In the case of a limited failure, the following additional steps can help you verify the status of your Split I/O Group infrastructure:
4. Check your SAN using the FC switch or director CLI or Web interface in order to verify any failure.
5. Check the FC connections between the two sites (passive WDM and links) using the FC switch or director CLI or Web interface in order to verify any failure.
6. Check the storage subsystem status using its own management interface in order to verify any failure.
After going through steps 1 through 6, when you have identified the root cause and the impact of the event on your infrastructure, you have all the information needed to take one of the following strategic decisions:
- Wait until the failure in one of the two sites is fixed, or
- Declare a disaster and start with the recovery actions that are described in the "Recovery guidelines" section.
If you decide to wait until the failure in one of the two sites is fixed, then when the impacted resources become available again the SVC Split I/O Group will be fully operational:
- Automatic Volume Mirroring resynchronization will take place.
- Missing nodes will rejoin the SVC clustered system.
If the impact of the failure is more serious and you are forced to declare a disaster, you will have to take a more strategic decision, discussed later in "Recovery guidelines" on page 935.
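From a host running SDD, the path check boils down to a few commands. The following is an illustrative sketch; the exact output columns vary by SDD version and platform, so treat the comments as guidance rather than exact output:

datapath query adapter   # one line per HBA; a DEGRAD or FAILED state points at a lost fabric
datapath query device    # per-LUN path list; paths through the failed site show CLOSE or DEAD
lsvpcfg                  # compact listing of the vpath devices and their underlying hdisks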

Diagnosis Guidelines for ISL configuration


All the diagnosis guidelines described in "Diagnosis Guidelines for NO ISL configuration" on page 916 are also valid when you have implemented a Split I/O Group configuration with ISL. Figure C-19 on page 935 shows an example of a Split I/O Group with ISL.

934

IBM System Storage SAN Volume Controller V6.3

Draft Document for Review January 17, 2012 6:10 am

7933 13 Split IO Group MASSIMO.fm

Figure C-19 ISL configuration with active/passive WDM

If you have implemented a Split I/O Group configuration with ISL, then in addition to the checklist steps described in "Diagnosis Guidelines for NO ISL configuration" on page 916, you should execute the following verification steps:
1. Check your SAN using the FC switch or director CLI or Web interface in order to verify any partial failure related to a single switch, director, or virtual SAN (Public or Private).
2. Check the FC connections between the two sites (active WDM and ISL) using the FC switch or director CLI or Web interface in order to verify any partial failure.
3. Check the ISL link status using your active WDM management interface.
4. Check the status of the quorum disk links using your SAN FC switch or director CLI or Web interface.
After you have identified the root cause and the impact of the event on your infrastructure, you have all the information needed to take one of the following decisions:
- Wait until the failure in one of the two sites is fixed, or
- Declare a disaster and start with the recovery actions described in "Recovery guidelines" on page 935.
If you decide to wait until the failure in one of the two sites is fixed, then when the impacted resources become available the SVC Split I/O Group will be fully operational:
- Automatic Volume Mirroring resynchronization will take place.
- Missing nodes will rejoin the SVC clustered system.

Recovery guidelines
In this section we will explore some recovery scenarios. Regardless of the scenario, the common starting point is the complete loss of site 1 or site 2 caused by a severe critical event.

After an initial analysis phase of the event, a strategic decision has to be made:
- Wait until the lost site is restored, or
- Start a recovery procedure, so that the surviving site configuration is rebuilt to provide the same performance and availability characteristics as before the event.
If the recovery times are too long and you cannot wait for the lost site's eventual return to life, you will need to take the appropriate recovery actions.

What do you need to supply to recover the Split I/O Group configuration
If you have arrived at this point, it is because you cannot wait for the lost site to be brought back to life in a reasonable time, so you need to take some recovery actions. The answers to the following questions determine the appropriate recovery action:
- Where do you want to recover to? In the same site or in a new site?
- Is it a temporary or a permanent recovery?
- If it is a temporary recovery, do we need to plan a failback scenario?
- Does the recovery action address performance issues or business continuity issues?
It is almost certain that we will need additional storage space, additional SVC nodes, and additional SAN components:
- Do we plan to use brand new nodes supplied by IBM?
- Do we plan to reuse other, existing SVC nodes that might currently be used for non-business-critical applications (for example, a test environment)?
- Do we plan to use new FC SAN switches or directors?
- Do we plan to reconfigure FC SAN switches or directors in order to host the newly acquired SVC nodes and storage?
- Do we plan to use new back-end storage subsystems?
- Do we plan to configure some free space on the surviving storage subsystems in order to host the space required for Volume Mirroring?
The answers to these questions direct the recovery strategy and the investment to make. These steps cannot be improvised; they must be part of a recovery plan, in order to create minimal impact on applications and therefore on service levels. We will describe in detail what the recovery guidelines are, assuming that we have already answered the above questions and have decided to recover a fully redundant configuration in the same surviving site, supplying new SVC nodes, new storage subsystems, and new FC SAN devices. We will also give some indication of how to reuse SVC nodes, storage, or SAN devices that are already available, and guidelines on how to plan a failback scenario. If you do need to recover your Split I/O Group infrastructure, we recommend that you involve IBM Support as early as possible.

936

IBM System Storage SAN Volume Controller V6.3

Draft Document for Review January 17, 2012 6:10 am

7933 13 Split IO Group MASSIMO.fm

Recovery Guidelines for No ISL configuration


In this section we describe in detail the recovery guidelines, assuming that we have decided to recover a fully redundant configuration in the same surviving site by supplying new SVC nodes, new storage subsystems, and new FC SAN devices. This recovery action is based on a decision to recover the Split I/O Group infrastructure so that it supplies the same performance characteristics as before the critical event, but with limited business continuity, because the Split I/O Group will be recovered at one site only. We assume that we have already received and physically installed a new SVC node, FC switches, and back-end storage subsystems. Figure C-20 shows the new recovery configuration.

Figure C-20 New recovery configuration in same surviving site

We have decided to recover the configuration exactly as it was, using passive WDM, even though it is being recovered in the same site. This makes it easier in the future to implement this configuration over distance, when a new site is provided, by simply executing the following major steps:
1. Disconnect the links between the passive WDMs.
2. Uninstall and reinstall all the brand new devices in the brand new site.
3. Reconnect the links between the passive WDMs.
The following steps have to be executed in order to recover your Split I/O Group configuration as it was before the critical event, in the same site, after you have installed the new devices.


1. Restore your back-end storage subsystem configuration as it was, starting from the backup you took, as suggested earlier. LUN masking can be done in advance because the SVC nodes' WWNNs are already known.
2. Restore your SAN configuration exactly as it was before the critical event. This can be done by simply configuring the new switches with the same domain IDs as before and connecting them to the surviving switches through the passive WDM. In this way, the WWPN zoning will automatically propagate to the new switches.
3. Connect, if possible, the new storage subsystems to exactly the same FC switch ports as before the critical event. The SVC-to-storage zoning has to be reconfigured in order to see the new storage subsystems' WWNNs. Old WWNNs can be removed, but take care to remove the right ones, because at this time we have just one active volume copy.
4. Do not connect the SVC node Fibre Channel cables yet. Wait until directed to do so by the SVC node WWNN change procedure.
5. Remove the offline node from the SVC system configuration with the SVC CLI commands shown in Example C-20.
Example: C-20 Remove node command
IBM_2145:Split_Cluster_1:admin>lsnode
id name UPS_serial_number WWNN status IO_group_id IO_group_name config_node UPS_unique_id hardware iscsi_name iscsi_alias panel_name enclosure_id canister_id enclosure_serial_number
1 node_159072 100014P293 500507680100C109 offline 0 io_grp0 no 2040000044802243 CG8 iqn.1986-03.com.ibm:2145.splitcluster1.node159072 159072
2 node_159680 100013I066 500507680100C478 online 0 io_grp0 yes 2040000043640186 CG8 iqn.1986-03.com.ibm:2145.splitcluster1.node159680 159680
IBM_2145:Split_Cluster_1:admin>rmnode node_159072
IBM_2145:Split_Cluster_1:admin>lsnode
id name UPS_serial_number WWNN status IO_group_id IO_group_name config_node UPS_unique_id hardware iscsi_name iscsi_alias panel_name enclosure_id canister_id enclosure_serial_number
2 node_159680 100013I066 500507680100C478 online 0 io_grp0 yes 2040000043640186 CG8 iqn.1986-03.com.ibm:2145.splitcluster1.node159680 159680

Or by using the GUI as shown in Figure C-21 on page 939.

938

IBM System Storage SAN Volume Controller V6.3

Draft Document for Review January 17, 2012 6:10 am

7933 13 Split IO Group MASSIMO.fm

Figure C-21 Remove node using GUI

6. First identify which copy id is offline for each volume using the SVC CLI command as shown in Example C-21.
Example: C-21 How to identify the Volume copy id
IBM_2145:Split_Cluster_1:admin>lsvdisk 0
id 0
name SLES_3650_10_01
IO_group_id 0
IO_group_name io_grp0
status degraded
mdisk_grp_id many
mdisk_grp_name many
.
lines omitted for brevity
.
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name DS3400_03_11
.
lines omitted for brevity
.
copy_id 1
status offline
.
lines omitted for brevity
.
tier generic_hdd
tier_capacity 100.00GB

Or by using the GUI as shown in Figure C-22.


Figure C-22 Identify the offline copy id

i. Remove each identified offline Volume Mirroring copy with the SVC CLI command as shown in Example C-22.
Example: C-22 rmvdiskcopy
IBM_2145:Split_Cluster_1:admin>rmvdiskcopy -copy 1 SLES_3650_10_01
IBM_2145:Split_Cluster_1:admin>lsvdisk SLES_3650_10_01
id 0
name SLES_3650_10_01
status degraded
.
lines omitted for brevity
.
copy_id 0
status online
sync yes
primary yes
.
lines omitted for brevity
.

Or by using the GUI as shown in Figure C-23.

940

IBM System Storage SAN Volume Controller V6.3

Draft Document for Review January 17, 2012 6:10 am

7933 13 Split IO Group MASSIMO.fm

Figure C-23 Delete copy
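If many volumes are affected, the identification and removal of the offline copies can be scripted. The following is a minimal sketch, with the same assumptions as before (SSH key access from a management host, svccluster as a placeholder cluster alias); test it against a single volume before running it against all of them:

#!/bin/sh
# Find every offline volume copy and remove it
ssh admin@svccluster "lsvdiskcopy -nohdr -delim : -filtervalue status=offline" |
while IFS=: read vdisk_id vdisk_name copy_id rest
do
    echo "Removing offline copy $copy_id of volume $vdisk_name"
    ssh admin@svccluster "rmvdiskcopy -copy $copy_id $vdisk_name"
done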

7. Power on the new node and leave the FC cables disconnected.
8. Change the new node's WWNN using the following procedure:
a. Power on the replacement node from the front panel with the Fibre Channel cables and the Ethernet cable disconnected. Once the node has booted, you may receive error 540, An Ethernet port has failed on the 2145, and/or error 558, The 2145 cannot see the fibre-channel fabric or the fibre-channel card port speed might be set to a different speed than the Fibre Channel fabric. This is to be expected, because the node was booted with no fiber-optic cables connected and no LAN connection. If you see error 550, Cannot form a cluster due to a lack of cluster resources, this node still thinks it is part of an SVC clustered system; if this is a new node from IBM, this should not occur. Change the WWNN of the replacement node to match the WWNN that you recorded earlier by following these steps:
b. From the front panel of the new node, press the down button until the Node: panel is displayed, and then use the right or left navigation button to display the Node WWNN: panel. Press and hold the down button, press and release the select button, and then release the down button. Line one should show Edit WWNN: and line two the last five numbers of this new node's WWNN.
c. Press and hold the down button, press and release the select button, and then release the down button to enter WWNN edit mode. The first character of the WWNN is highlighted.

Appendix C. SAN Volume Controller Split I/O Group Overview, Diagnostics, and Recovery Guidelines

941

7933 13 Split IO Group MASSIMO.fm

Draft Document for Review January 17, 2012 6:10 am

Note: When changing the WWNN you may receive error 540, An Ethernet port has failed on the 2145, and/or error 558, The 2145 cannot see the FC fabric or the FC card port speed might be set to a different speed than the Fibre Channel fabric. This is to be expected, because the node was booted with no fiber-optic cables connected and no LAN connection. However, if this error occurs while you are editing the WWNN, you will be taken out of edit mode with partial changes saved. You will need to re-enter edit mode by starting again at step b.
d. Press the up or down button to increment or decrement the character that is displayed. The characters wrap F to 0 or 0 to F.
e. Press the left navigation button to move to the next field, or the right navigation button to return to the previous field, and repeat step d for each field. At the end of this step, the characters that are displayed must be the same as the WWNN that you recorded.
f. Press the select button to retain the characters that you have updated and return to the WWNN panel.
g. Press the select button again to apply the characters as the new WWNN for the node.
Important: You must press the select button twice, as steps f and g instruct you to do. After step f it may appear that the WWNN has been changed, but it is step g that applies the change.
h. Ensure that the WWNN has changed by displaying the Node WWNN: panel again, as in step b.
9. Connect the node to the same FC switch ports that it used before the critical event. This is the key point of the recovery procedure: connecting the new SVC nodes to the same SAN ports and reusing the same SVC WWNNs avoids having to reboot, rediscover, or reconfigure anything from the host point of view in order to see the lost disk resources and bring the paths back to life.
Important: Do not connect the new nodes to different ports on the switch or director, because this will cause the port IDs to change, which could impact host access to volumes or cause problems with adding the new node back into the clustered system. If you are not able to connect the SVC nodes to the same FC SAN ports as before, you will be forced to reboot, rediscover, or reconfigure your hosts in order to see the lost disk resources and bring the paths back to life.
10. Issue the SVC CLI command shown in Example C-23 to verify that the last five characters of the WWNN are correct.
Example: C-23 Verify candidate node with correct WWNN
IBM_2145:Split_Cluster_1:admin>lsnodecandidate
id               panel_name UPS_serial_number UPS_unique_id    hardware
500507680100C109 159072     100014P293        2040000044802243 CG8

Important: If the WWNN does not match the original node's WWNN exactly as recorded, you must repeat steps 8b to 8g.
11. Add the node to the clustered system and ensure that it is added back to the same I/O group as the original node, with the SVC CLI commands shown in Example C-24.

942

IBM System Storage SAN Volume Controller V6.3

Draft Document for Review January 17, 2012 6:10 am

7933 13 Split IO Group MASSIMO.fm

Example: C-24 Adding node
IBM_2145:Split_Cluster_1:admin>addnode -wwnodename 500507680100C109 -iogrp 0
Node, id [3], successfully added
IBM_2145:Split_Cluster_1:admin>lsnode
id name UPS_serial_number WWNN status IO_group_id IO_group_name config_node UPS_unique_id hardware iscsi_name iscsi_alias panel_name enclosure_id canister_id enclosure_serial_number
3 node_159072 100014P293 500507680100C109 online 0 io_grp0 no 2040000044802243 CG8 iqn.1986-03.com.ibm:2145.splitcluster1.node159072 159072
2 node_159680 100013I066 500507680100C478 online 0 io_grp0 yes 2040000043640186 CG8 iqn.1986-03.com.ibm:2145.splitcluster1.node159680 159680

Or by using the GUI as shown in Figure C-24.

Figure C-24 Adding node

12. Verify that all volumes for this I/O group are back online and no longer degraded. If the node replacement process is being done disruptively, such that no I/O is occurring to the I/O group, you still need to wait a period of time (we recommend 30 minutes) to make sure the new node is back online and available to take over before you do the next node in the I/O group. Use the SVC CLI command shown in Example C-25 to verify that all volumes for this I/O group are back online and no longer degraded.
Example: C-25 No longer degraded volumes
IBM_2145:Split_Cluster_1:admin>lsvdisk -filtervalue status=degraded
IBM_2145:Split_Cluster_1:admin>

Or by using the GUI as shown in Figure C-25 on page 944.

Appendix C. SAN Volume Controller Split I/O Group Overview, Diagnostics, and Recovery Guidelines

943

7933 13 Split IO Group MASSIMO.fm

Draft Document for Review January 17, 2012 6:10 am

Figure C-25 Volume no longer degraded
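If you prefer to script this waiting period, a minimal polling sketch follows; the cluster alias, the SSH access, and the one-minute interval are our assumptions:

#!/bin/sh
# Poll once a minute until no volume reports a degraded status
while ssh admin@svccluster "lsvdisk -nohdr -filtervalue status=degraded" | grep -q .
do
    sleep 60
done
echo "All volumes are back online"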

13. Discover the new MDisks supplied by the new back-end storage subsystems. They will appear with a status of online and a mode of unmanaged, as shown in Example C-26.
Example: C-26 New MDisks discovered
IBM_2145:Split_Cluster_1:admin>detectmdisk
IBM_2145:Split_Cluster_1:admin>lsmdisk -filtervalue mode=unmanaged
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID tier
0 DS3400_09_11 online unmanaged 300.0GB 0000000000000000 DS3400_09 600a0b8000369a8100000ac44e4dd8cc00000000000000000000000000000000 generic_hdd
1 DS3400_09_12 online unmanaged 300.0GB 0000000000000001 DS3400_09 600a0b80003743e800000e184e4ddbbb00000000000000000000000000000000 generic_hdd
2 DS3400_09_21 online unmanaged 300.0GB 0000000000000002 DS3400_09 600a0b8000369a8100000ac74e4dd8fa00000000000000000000000000000000 generic_hdd
3 DS3400_09_22 online unmanaged 300.0GB 0000000000000003 DS3400_09 600a0b80003743e800000e1a4e4ddbe900000000000000000000000000000000 generic_hdd
13 mdisk0 online unmanaged 600.0GB 0000000000000000 DS5020 60080e5000182ec60000b07e4e560bfc00000000000000000000000000000000 generic_hdd
14 mdisk1 online unmanaged 600.0GB 0000000000000001 DS5020 60080e5000182ec60000b0834e560c3b00000000000000000000000000000000 generic_hdd
15 mdisk2 online unmanaged 600.0GB 0000000000000002 DS5020 60080e5000182ec60000b0814e560c1e00000000000000000000000000000000 generic_hdd

944

IBM System Storage SAN Volume Controller V6.3

Draft Document for Review January 17, 2012 6:10 am

7933 13 Split IO Group MASSIMO.fm

16 mdisk3 online unmanaged 600.0GB 0000000000000003 DS5020 60080e5000182ec60000b0864e560c5800000000000000000000000000000000 generic_hdd

Or by using the GUI as shown in Figure C-26.

Figure C-26 Newly discovered MDisk

14. Add the MDisks to the storage pools using the SVC CLI commands shown in Example C-27, recreating the MDisk-to-storage-pool relationships that existed before the critical event.
Important: Before you add the newly discovered MDisks, you should remove the previous MDisks that are still defined in each storage pool but no longer physically exist after the critical event (they may appear to the SVC in an offline or degraded state). A sketch of such a removal follows.
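The following line is an illustrative sketch of that removal, not a command taken from our environment: the MDisk name is a placeholder for whichever stale entry your pool still shows, and -force is required when the MDisk can no longer be accessed. Use it with care:

IBM_2145:Split_Cluster_1:admin>rmmdisk -mdisk <old_offline_mdisk> -force DS3400_09_11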
Example: C-27 Adding new MDisks to the storage pools
IBM_2145:Split_Cluster_1:admin>addmdisk -mdisk DS3400_09_11 DS3400_09_11
IBM_2145:Split_Cluster_1:admin>addmdisk -mdisk DS3400_09_12 DS3400_09_12
IBM_2145:Split_Cluster_1:admin>addmdisk -mdisk DS3400_09_21 DS3400_09_21

Or by using the GUI as shown in Figure C-27 on page 946.

Appendix C. SAN Volume Controller Split I/O Group Overview, Diagnostics, and Recovery Guidelines

945

7933 13 Split IO Group MASSIMO.fm

Draft Document for Review January 17, 2012 6:10 am

Figure C-27 Adding new MDisk

Important: After you have re-added your newly discovered MDisks to the storage pools, the three-quorum-disk configuration is automatically fixed. We can check this with the SVC CLI command shown in Example C-28.
Example: C-28 Quorum status
IBM_2145:Split_Cluster_1:admin>lsquorum
quorum_index status id name          controller_id controller_name active object_type override
0            online 8  DS4700_SVC_Q1 2             DS4700          yes    mdisk       yes
1            online 0  DS3400_09_11  1             DS3400_09       no     mdisk       yes
2            online 4  DS3400_03_11  0             DS3400_03       no     mdisk       yes

15. Reactivate Volume Mirroring for each volume, in accordance with your Volume Mirroring requirements, in order to recreate the same business continuity infrastructure as before the critical event, using the SVC CLI command shown in Example C-29.
Example: C-29 addvdiskcopy example
IBM_2145:Split_Cluster_1:admin>addvdiskcopy -mdiskgrp DS3400_09_12 SLES_3650_11_02
Vdisk [3] copy [1] successfully created

Or by using the GUI as shown in Figure C-28 on page 947 and in Figure C-29 on page 947.

946

IBM System Storage SAN Volume Controller V6.3

Draft Document for Review January 17, 2012 6:10 am

7933 13 Split IO Group MASSIMO.fm

Figure C-28 Add volume copy 1

Figure C-29 Add volume copy 2
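Re-mirroring many volumes can also be scripted. The sketch below re-adds a copy for every volume that currently has a single copy. It is a simplification: in practice each volume may need a different target pool, and the pool name, cluster alias, and SSH access are all assumptions:

#!/bin/sh
# Add a second copy, in one recovered pool, to every single-copy volume
TARGET_POOL=DS3400_09_12
for vol in $(ssh admin@svccluster "lsvdisk -nohdr -delim : -filtervalue copy_count=1" | cut -d: -f2)
do
    ssh admin@svccluster "addvdiskcopy -mdiskgrp $TARGET_POOL $vol"
done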

16. Check the status of your Volume Mirroring synchronization progress with the SVC CLI command shown in Example C-30.
Example: C-30 lsvdisksyncprogress example
IBM_2145:Split_Cluster_1:admin>lsvdisksyncprogress
vdisk_id vdisk_name      copy_id progress estimated_completion_time
0        SLES_3650_10_01 1       14       111012121307
1        SLES_3650_11_01 1       13       111012121421
2        SLES_3650_10_02 1       13       111012121455
3        SLES_3650_11_02 1       11       111012121709

It is possible to speed up the synchronization process with the chvdisk command, but the more speed you give to the synchronization process, the greater the impact on overall performance may be.
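As an illustrative sketch, the synchronization rate of a single volume can be raised with the -syncrate parameter of chvdisk; the rate value of 85 here is an arbitrary example, and the mapping of rate values to actual throughput is described in the Command-Line Interface User's Guide:

IBM_2145:Split_Cluster_1:admin>chvdisk -syncrate 85 SLES_3650_10_01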
Appendix C. SAN Volume Controller Split I/O Group Overview, Diagnostics, and Recovery Guidelines

947

7933 13 Split IO Group MASSIMO.fm

Draft Document for Review January 17, 2012 6:10 am

17. With the following SVC CLI command you could consider rebalancing your Split I/O Group configuration, so that the Volume Mirroring primary copy is related to the storage pool and preferred node as it was before the critical event, even though they are now in the same site. Doing so will help with an eventual future stretch of your configuration when a new remote site becomes available. You can do that using the SVC CLI command shown in Example C-31.
Example: C-31 Change Volume primary copy id
IBM_2145:Split_Cluster_1:admin>chvdisk -primary 1 SLES_3650_11_01

Or by using the GUI as shown in Figure C-30.

Figure C-30 Make primary with GUI

You have now completed the procedure to recover a Split I/O Group configuration after a critical event. At this point all your volumes are accessible from your hosts' point of view, and the recovery action has not impacted your applications.

Recovery Guidelines for ISL configuration


All the recovery guidelines described in "Recovery Guidelines for No ISL configuration" on page 937 are also valid when you have implemented a Split I/O Group configuration with ISL. Figure C-31 on page 949 shows an example of a Split I/O Group with ISL.

948

IBM System Storage SAN Volume Controller V6.3

Draft Document for Review January 17, 2012 6:10 am

7933 13 Split IO Group MASSIMO.fm

Figure C-31 ISL configuration with active/passive WDM

If you have implemented a Split I/O Group configuration with ISL, then in addition to the steps described in "Recovery Guidelines for No ISL configuration" on page 937, you will have to:
1. Restore your SAN configuration (Private and Public) according to your documentation.
2. Restore your active/passive WDM configuration in order to re-establish the ISLs between the two sites.

Appendix C. SAN Volume Controller Split I/O Group Overview, Diagnostics, and Recovery Guidelines

949

7933 13 Split IO Group MASSIMO.fm

Draft Document for Review January 17, 2012 6:10 am

950

IBM System Storage SAN Volume Controller V6.3

Draft Document for Review January 17, 2012 6:10 am

7933bibl.fm

Related publications
The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this book.

IBM Redbooks
The following IBM Redbooks publications provide additional information about the topics in this document. Note that some publications referenced in this list might be available in softcopy only.
- Introduction to Storage Area Networks, SG24-5470
- IBM System Storage: Implementing an IBM SAN, SG24-6116
- DS4000 Best Practices and Performance Tuning Guide, SG24-6363
- IBM System Storage Business Continuity: Part 1 Planning Guide, SG24-6547
- IBM System Storage Business Continuity: Part 2 Solutions Guide, SG24-6548
- Get More Out of Your SAN with IBM Tivoli Storage Manager, SG24-6687
- IBM Tivoli Storage Area Network Manager: A Practical Introduction, SG24-6848
- DS8000 Performance Monitoring and Tuning, SG24-7146
- Monitoring Your Storage Subsystems with TotalStorage Productivity Center, SG24-7364
- Using the SVC for Business Continuity, SG24-7371
- SAN Volume Controller: Best Practices and Performance Guidelines, SG24-7521
- SAN Volume Controller V4.3.0 Advanced Copy Services, SG24-7574
- IBM XIV Storage System: Architecture, Implementation and Usage, SG24-7659
- IBM Tivoli Storage Productivity Center V4.1 Release Guide, SG24-7725
- IBM SAN Volume Controller 4.2.1 Cache Partitioning, REDP-4426

Other publications
These publications are also relevant as further information sources:
- IBM System Storage SAN Volume Controller: Planning Guide, GA32-0551
- IBM System Storage Open Software Family SAN Volume Controller: Planning Guide, GA22-1052
- IBM System Storage SAN Volume Controller: Service Guide, GC26-7901
- IBM System Storage SAN Volume Controller Model 2145-8A4 Hardware Installation Guide, GC27-2219
- IBM System Storage SAN Volume Controller Model 2145-8G4 Hardware Installation Guide, GC27-2220
- IBM System Storage SAN Volume Controller Models 2145-8F2 and 2145-8F4 Hardware Installation Guide, GC27-2221


- IBM SAN Volume Controller Software Installation and Configuration Guide, GC27-2286
- IBM System Storage SAN Volume Controller Command-Line Interface User's Guide, GC27-2287
- IBM System Storage Master Console: Installation and User's Guide, GC30-4090
- Multipath Subsystem Device Driver User's Guide, GC52-1309
- IBM System Storage SAN Volume Controller Model 2145-CF8 Hardware Installation Guide, GC52-1356
- IBM System Storage Productivity Center Software Installation and User's Guide, SC23-8823
- IBM System Storage Productivity Center Introduction and Planning Guide, SC23-8824
- Subsystem Device Driver User's Guide for the IBM TotalStorage Enterprise Storage Server and the IBM System Storage SAN Volume Controller, SC26-7540
- IBM System Storage Open Software Family SAN Volume Controller: Installation Guide, SC26-7541
- IBM System Storage Open Software Family SAN Volume Controller: Service Guide, SC26-7542
- IBM System Storage Open Software Family SAN Volume Controller: Configuration Guide, SC26-7543
- IBM System Storage Open Software Family SAN Volume Controller: Command-Line Interface User's Guide, SC26-7544
- IBM System Storage Open Software Family SAN Volume Controller: CIM Agent Developer's Reference, SC26-7545
- IBM System Storage Open Software Family SAN Volume Controller: Host Attachment Guide, SC26-7563
- Command-Line Interface User's Guide, SC27-2287
- IBM System Storage Productivity Center User's Guide Version 1 Release 4, SC27-2336
- IBM TotalStorage Multipath Subsystem Device Driver User's Guide, SC30-4096
- IBM System Storage SAN Volume Controller V5.1.0 - Host Attachment Guide, SG26-7905
- IBM Tivoli Storage Productivity Center / IBM Tivoli Storage Productivity Center for Replication Installation and Configuration Guide, SC27-2337

Online resources
These websites are also relevant as further information sources:
- IBM TotalStorage home page:
http://www.storage.ibm.com
- SAN Volume Controller supported platforms:
http://www-1.ibm.com/servers/storage/support/software/sanvc/index.html
- Download site for Windows Secure Shell (SSH) freeware:
http://www.chiark.greenend.org.uk/~sgtatham/putty

952

IBM System Storage SAN Volume Controller V6.3

Draft Document for Review January 17, 2012 6:10 am

7933bibl.fm

- IBM site to download SSH for AIX:
http://oss.software.ibm.com/developerworks/projects/openssh
- Open source site for SSH for Windows and Mac:
http://www.openssh.com/windows.html
- Cygwin Linux-like environment for Windows:
http://www.cygwin.com
- IBM Tivoli Storage Area Network Manager site:
http://www-306.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageAreaNetworkManager.html
- Microsoft Knowledge Base Article 131658:
http://support.microsoft.com/support/kb/articles/Q131/6/58.asp
- Microsoft Knowledge Base Article 149927:
http://support.microsoft.com/support/kb/articles/Q149/9/27.asp
- Sysinternals home page:
http://www.sysinternals.com
- Subsystem Device Driver download site:
http://www-1.ibm.com/servers/storage/support/software/sdd/index.html
- IBM TotalStorage Virtualization home page:
http://www-1.ibm.com/servers/storage/software/virtualization/index.html
- SVC support page:
http://www-947.ibm.com/systems/support/supportsite.wss/selectproduct?taskind=4&brandind=5000033&familyind=5329743&typeind=0&modelind=0&osind=0&psid=sr&continue.x=1
- SVC online documentation:
http://publib.boulder.ibm.com/infocenter/svcic/v3r1m0/index.jsp
- IBM Redbooks publications about SVC:
http://www.redbooks.ibm.com/cgi-bin/searchsite.cgi?query=SVC

Help from IBM


IBM Support and downloads:
ibm.com/support
IBM Global Services:
ibm.com/services



Index
Symbols
?????? 386

Numerics
10 GbE 156

A
active quorum disk 40
add a node 527
add additional ports 671
add an HBA 484
Add SSH Public Key 136
addressable extents 19
administration tasks 659
Advanced Copy Services 94
Advanced Settings 664, 827, 829
AIX host system 171
AIX specific information 162
AIX toolbox 171
AIX-based hosts 162
alias 31
alias string 157
aliases 31
analysis 102, 880
application server guidelines 93
application testing 375
assign VDisks 502
assigned VDisk 166
asynchronous notifications 398-399
asynchronous remote 435
asynchronous remote copy 36, 412, 435-436
asynchronous replication 457
asynchronously 435
authentication 53, 139, 159
authentication service 56
automate tasks 511
automatic Linux system 200
automatic update process 200
automatically discover 472
automation 511
auxiliary 446, 569, 592
auxiliary VDisk 436, 447, 454
available managed disks 473

B
back-end application 894
background copy 427, 434, 447, 454
background copy bandwidth 459
background copy progress 564, 587
background copy rate 395-396
backup 95, 374
   of data with minimal impact on production 382
backup time 374
balance 84
bandwidth 68, 84, 96-97, 445, 881, 884
bandwidth impact 459
basic setup requirements 137
BB 905
bind 224
bit array 374
bitmap 37, 374
bitmap space 27
block-level protocol 156
boot 100
boss node 39
bottleneck 60
bottlenecks 102-103
budget 30
budget allowance 30
buffers 907
buffer-to-buffer 905-906
burst traffic 903
business requirements 102, 880

C
cable connections 73
cable length 59
cache 25, 42, 389, 436, 881
cache disabled 25
cache mode 904
caching 103, 881
caching capability 102, 880
candidate node 528
capacity 91
capacity measurement 685
CDB 31
Challenge Handshake Authentication Protocol 81
challenge message 34
Challenge-Handshake Authentication Protocol 34, 159, 482
change the IP addresses 522
change volumes 595
changes 374
Channel extender 892
channel extender 897
CHAP 34, 81, 159, 482
CHAP authentication 34, 159, 667
CHAP secret 34, 159, 667
chpartnership 459
chrcconsistgrp 461
chrcrelationship 461
chunks 89, 233
CIM agent 43
CIM Client 42
CIMOM 32, 42, 159
CLI 134, 579


   commands 171
   scripting for SVC task automation 511
cloning 95
cluster 38
   creation 527
   IP address 115
   shutting down 471, 524, 536
   time zone 522-523
cluster (SVC) 892
cluster overview 38
cluster partnership 418
cluster shared disk 183
clustered ethernet port 160
clustered server resources 38
clustered system 76
clustered system configuration 86
cluster-level statistics 882
clusters 68
cold 352
collection interval 881
Colliding writes 437
colliding writes 438
Command Descriptor Block 31
command syntax 515
COMPASS architecture 57
compression 100
concepts 9
concurrent instances 232
concurrent software upgrade 605
config node 882
configurable warning capacity 29
configuration 149
configuration data 16
configuration node 33, 39, 160, 527, 892
configure AIX 162
configure SDD 224
configuring the GUI 118
connected 421-422, 448
connected state 424, 449, 451
connectivity 41
consistency 450
consistency freeze 424, 432, 451
Consistency Group 382, 384, 892
consistency group 383
   limits 385
consistent 422-423, 449-450
consistent data set 375
Consistent Stopped state 420, 447
Consistent Synchronized state 421, 447
ConsistentDisconnected 426, 453
ConsistentStopped 424, 451
ConsistentSynchronized 425, 452
container 89
contingency capacity 29
controller, renaming 471
conventional storage 227
cookie crumbs recovery 629
cooling 69
copied state 892
copy bandwidth 97, 459

copy operation 37
copy process 432, 461
copy rate 396
copy rate parameter 95
Copy Services
   managing 537-538
COPY_COMPLETED 398
copying state 543
core-edge 880
counterpart SAN 104, 893, 897
counters 882
CPU cycle 60
CPU utilization 884
create a FlashCopy 540
create a new VDisk 684
create an SVC partnership 768
create mapping command 539-540, 739, 751
create SVC partnership 557, 580
creating a VDisk 488
creating managed disk groups 645
credits 905-906
current cluster state 40
cycling 595
cycling mode 595, 601
cycling period 595
cyclingmode 597
Cygwin 190

D
data backup with minimal impact on production 382 moving and migration 375 source 385 data change rates 100 data consistency 538 data corruption 450 data flow 77 data migration 69, 232 data migration and moving 375 Data Migration Planner 353 Data Migrator 353 data mining 376 data mover appliance 504 Data Placement Advisor 353 degraded mode 87 delete a FlashCopy 547 a host 484 a host port 486 a port 674, 678, 700, 704 a VDisk 500, 697 ports 485 Delete consistency group command 548 dependent writes 384, 442 destaged 42 destructive 616 detect the new MDisks 472 detected 472 Device Mapper Multipath 207 differentiator 61


differing storage 880 directory protocol 56 dirty bit 427, 454 disconnected 421–422, 448 disconnected state 449 discovering assigned VDisk 166 discovering newly assigned MDisks 645, 653 disk access profile 498 disk controller renaming 642 systems 470 viewing details 470, 641 disk internal controllers 61 disk timeout value 218 disk zone 76 diskpart, see 183 display summary information 473 distance 411, 895 distance extenders 907 distance limitations 411 DM-MPIO 207 documentation 68, 641 dual-redundant ISLs 76 dump I/O statistics 618 I/O trace 618 listing 617 other nodes 619 durability 61 dynamic pathing 221–222 dynamic shrinking 705 dynamic tracking 163

E
Easy Tier 21 Easy Tier operating modes 353 elapsed time 95 empty MDG 476 empty state 427, 454 Enterprise Storage Server (ESS) 410 entire VDisk 382 error 424, 448, 451, 474, 615 Error Code 893 error handling 397 Error ID 893 error log 614 error notification 613 ESS (Enterprise Storage Server) 410 ESS to SVC 237 eth0 60 eth1 60 Ethernet 73 Ethernet connection 74 Ethernet ports 81 event 614 event log 617 events 420, 446 Excluded 893 excludes 656 Execute Metro Mirror 563, 585 expand a VDisk 182, 500 a volume 183 expand a space-efficient VDisk 500 extended distance solutions 411 extended quorum disks 909 extenders 96 extending the distance 96 Extent 893 extent 89, 228 extent level 228 extent migration plan 21 extent size 19–20 extent sizes 89 extents 20 allocation 352

F
fabric remote 104 fabric interconnect 895 failover 221, 436 failover only 203 failover situation 411 fast fail 163 FAStT 410 FC optical distance 59 features, licensing 616 featurization log 617 Fibre Channel interfaces 59 Fibre Channel port fan in 104, 897 Fibre Channel Port Login 32 Fibre Channel port logins 894 Fibre Channel ports 73 file system 206 filtering 515, 636 filters 515 FlashCopy 37, 95, 374 bitmap 386 how it works 377, 381 image mode disk 390 indirection layer 385 mapping 376 mapping events 391 serialization of I/O 397 synthesis 397 FlashCopy indirection layer 385 FlashCopy mapping 382 FlashCopy mapping states 393 Copying 393 Idling/Copied 393 Prepared 394 Preparing 394 Stopped 393 Suspended 394 FlashCopy mappings 384 FlashCopy properties 385 FlashCopy rate 95 flexibility 102, 880 flow control 905 foreground I/O latency 459 free extents 500 Freeze time 595 frequency 881 front-end application 894 FRU 894 Full Feature Phase 32 fully allocated 27

G
gateway IP address 115 GBICs 895 general housekeeping 641 generating output 516 generator 135 geographically dispersed 410 Global Mirror 36 Global Mirror guidelines 98 Global Mirror relationship 438 Global Mirror remote copy technique 435 gminterdelaysimulation 456 gmintradelaysimulation 456 gmlinktolerance 456–457 governing 30 governing rate 30 graceful manner 529 grain 386, 894 grain size 95 grain sizes 95 grains 27, 95, 396 granularity 382 GUI 138

H
Hardware initiator 157 Hardware Management Console 43 hardware nodes 57 hardware overview 57 HBA 480, 894 HBA ports 94 heartbeat 76, 79 heartbeat signal 41 heartbeat traffic 97 heatmap 352 help 641 high availability 38, 68 high-bandwidth link 595 home directory 171 host and application server guidelines 93 configuration 149 creating 480 deleting 670 information 661 showing 508 systems 75 host adapter configuration settings 173 host bus adapter 480, 906 Host failover 161 Host ID 894 host mapping 22 host object 155 Host Type 664, 667 hot 352 HP-UX support information 221–222

I
I/O budget 30 I/O governing 30, 498 I/O governing rate 498 I/O Group 895 I/O group 895 renaming 531 viewing details 531 I/O load 880 I/O Monitoring 353 I/O pair 70 I/Os per second 68 I/O statistics dump 618 I/O trace dump 618 ICAT 42–43 identical data 410, 446 idling 425, 452 idling state 432, 461 IdlingDisconnected 425, 452 image 20 Image Mode 895 image mode 235, 726 image mode disk 390 image mode MDisk 235 Image Mode Migration 38 image mode to image mode 268 image mode VDisk 230 image mode virtual disks 92 inappropriate zoning 84 inconsistent 422, 449 Inconsistent Copying state 421, 447 Inconsistent Stopped state 420, 447 InconsistentCopying 424, 451 InconsistentDisconnected 426, 453 InconsistentStopped 424, 450 incremental 37 independent power supply 903 indirection layer 374, 385 indirection layer algorithm 387 informational error logs 398 initiator 156 initiator name 31 initiator port 664, 667 input power 524 install 67 insufficient bandwidth 396 integrity 383–384 interaction with the cache 389 intercluster link 418 intercluster link bandwidth 459 intercluster link maintenance 418–419, 444 intercluster Metro Mirror 411, 435 intercluster zoning 418–419, 444


interfaces 884 internal counters 882 Internet Storage Name Service 34, 159, 895 interswitch link (ISL) 896 interval 524 intracluster Metro Mirror 410, 435 IP address modifying 521, 841 IP addresses 69, 841, 843, 847 IP subnet 74 ipconfig 144 IPv4 143 IPv6 143 IPv6 addresses 144 IQN 22, 31, 81, 157, 894 IQNs 31 iSCSI 30, 60, 68, 156 iSCSI Address 31 iSCSI client 156 iSCSI HBA 157 iSCSI IP address failover 160 iSCSI Multipathing 34 iSCSI Name 31 iSCSI name 157 iSCSI node 31 iSCSI nodes 157 iSCSI Qualified Name 31, 157, 894 iSCSI qualified name 81 iSCSI Send Target 34 iSCSI session 32 iSCSI Simple Name Server 81 iSCSI target node failover 160 iSCSI traffic 156 iSCSI volume discovery 33 ISL (interswitch link) 896 ISL hop count 411, 435 ISL load 880 ISL Trunking 880 iSNS 34, 81, 159, 895 issue CLI commands 190

J
jumbo frames 34

K
kernel level 200 key 159 key files on AIX 171

L
LAN Interfaces 59 LAN segment 86 last extent 237 latency 97 latency restrictions 6 layer 95 layers 95 LBA 187, 427, 454 LDAP 6, 45, 55 lease expiry 76 license 115 licensing feature 616 licensing feature settings 616 Lightweight Directory Access Protocol 55 limiting factor 102 linear 881 link errors 59 Linux 171 Linux kernel 39 Linux on Intel 199 list dump 617 listing dumps 617 Load balancing 203 Local authentication 44 local cluster 429, 455 Local fabric 895 local fabric interconnect 895 Local users 54 log 881 logged 614 Logical Block Address 427, 454 logical block address 187 logical configuration data 621 logical disks 20 Login Phase 32 logins 664, 667 lower tier 352 lower-bandwidth 6 lower-bandwidth remote mirroring 6 lsrcrelationshipcandidate 460 LU 895 LUN limitations 172 LUN masked 22 LUN masking 34 LUNs 895

M
magnetic disks 61 maintenance levels 173 maintenance tasks 605 Managed 895 Managed disk 895 managed disk 895 working with 641 managed disk group 476 creating 645 viewing 647 Managed Disks 895 managed mode MDisk 235 managed mode to image mode 263 managed mode virtual disk 92 management 102, 880 map a VDisk to a host 501 mapping 381 mapping events 391 Master 895 master 446 master console 69


master VDisk 447, 454 masterchange 597 MC 895 MDG 895 MDG level 476 MDGs 69 MDisk 69, 895 adding 476, 650, 654 discovering 471, 653 including 474, 656 information 650 modes 235 name parameter 473 removing 480, 650, 655 renaming 474, 652 showing 507 showing in group 476 MDisk group creating 479, 645 deleting 479, 649 renaming 479, 648 showing 476, 507 viewing information 478 MDiskgrp 895 MDisks 884 Metro Mirror 36, 410 Metro Mirror consistency group 430–431, 433–434, 460–463 Metro Mirror features 412, 436 Metro Mirror process 445 Metro Mirror relationship 430, 432, 434, 438, 460–461, 463 microcode 41 Microsoft Active Directory 55 Microsoft Cluster 183 Microsoft Multi Path Input Output 173 Microsoft Volume Shadow Copy Service 191 migrate 227 migrate a VDisk 230 migrate between MDGs 230 migrate data 235 migrate VDisks 503 migrating multiple extents 228 migration algorithm 233 functional overview 232 operations 228 overview 228 tips 237 migration activities 228 migration plan 352 migration process 504 migration progress 232 migration report 22 migration threads 228 mirrored 436 mirrored copy 435 mirrored volume 26 mkpartnership 459 mkrcconsistgrp 460

mkrcrelationship 460 MLC 60 modify a host 483 modifying a VDisk 497 mount 206 mount point 206 moving and migrating data 375 MPIO 93, 161, 173 MSCS 183 MTU 34 MTU sizes 34 multi layer cell 60 multipath I/O 93 multipath storage solution 174 multipathing device driver 93 multipathing driver 161 Multipathing drivers 34 multiple disk arrays 102, 880 multiple extents 228 multiple paths 34 multiple virtual machines 212 multiprotocol routers 96

N
network bandwidth 100 Network Entity 157 network interface cards 156 Network Portals 157 Network Time Protocol 853 new mapping 501 NICs 156 Node 896 node 39, 526 adding 527 deleting 528 failure 397 port 894 renaming 528 shutting down 529 viewing details 526 node details 526 node dumps 619 node level 526 Node Unique ID 39 nodes 68 non-preferred path 221 non-redundant 893 non-zero contingency 29 N-port 896 NTP 853

O
offline rules 230 offload features 33 older disk systems 103 on screen content 515, 636 online help 641 on-screen content 515 OpenSSH 171


OpenSSH client 190 operating modes 353 operating system versions 173 ordering 384 organizing on-screen content 515 other node dumps 619 overall performance needs 68 overloaded 76 Oversubscription 896 oversubscription 880, 896 overwritten 381, 611

P
package numbering and version 605, 856 parallelism 232 partial last extent 236 partner node 160 partnership 16, 456 passphrase 135 path failover 221 path failure 398 path offline 398 path offline for source VDisk 398 path offline for target VDisk 398 path offline state 398 path-selection policy algorithms 203 peak 459 peak workload 97 pended 30 per cluster 232 per managed disk 233 per node statistics 882 performance 91, 880 performance advantage 102, 880 performance considerations 880–881 performance function 21 performance improvement 102, 880 performance monitoring tool 98 performance requirements 68 performance scalability 38 performance statistics 98, 881 physical location 69 physical planning 69 physical rules 70 physical site 69 Physical Volume Links 222 PiT consistent data 374 PiT copy 386 planning rules 68 plink 512 PLOGI 32 Point-in-Time 37 point-in-time copy 423, 450 policing 30 policy decision 428, 454 port adding 484, 671 deleting 485, 674 port binding 224 Port Mask 664, 667

port mask 94, 155 port masking 155 port speeds 78 PortChanneling 880 Power Systems 171 PPRC background copy 427, 434, 454 commands 428, 455 configuration limits 455 detailed states 423, 450 preferred access node 92 preferred node 25, 902 preferred path 221 pre-installation planning 68 Prepare 896 prepare (pre-trigger) FlashCopy mapping command 541 PREPARE_COMPLETED 398 preparing volumes 170 pre-trigger 541 primary 436, 569, 592 primary clustered system 99 primary copy 27 priority 504 priority setting 504 private key 133, 135, 171 production VDisk 454 provisioning 459 public key 133, 135, 171, 512 PuTTY 43, 134, 137, 525 CLI session 141 default location 135 security alert 142 putty 6 PuTTY application 141, 529 PuTTY Installation 190 PuTTY Key Generator 135–136 PuTTY Key Generator GUI 134 PuTTY Secure Copy 608 PuTTY session 142 PuTTY SSH client software 190 PVLinks 222

Q
QLogic HBAs 200 Quality Of Service 30 Queue Full Condition 30 quiesce 525 quorum candidates 40 Quorum Disk 39 quorum disk 18, 39, 902–903, 926 quorum disk candidate 40 quorum disk placement 902 quorum disks 87 quorum index 926 quorum status 925 quorum support 909

R
RAID 896


RAID controller 75–76 RAID mode 880 RAID size 880 RAMAC 61 RAS 896 real capacity 28–29 real-time performance monitoring 884 real-time synchronized 410 reassign the VDisk 503 recall commands 468, 515 Recovery Point Objective 36 recovery point objective 6 recovery procedures 36 Redbooks website Contact us xxviii redundancy 60, 97 redundant 893 Redundant SAN 897 redundant SAN 897 relationship 382, 445 relationship state diagram 420, 446 reliability 91 Reliability, Availability, and Serviceability (RAS) 896 remote 897 Remote authentication 44 remote authentication 45 remote fabric 104, 895 interconnect 895 Remote users 55 remove a disk 187 remove an MDG 479 remove WWPN definitions 485 rename a disk controller 642 rename an MDG 648, 756, 778 rename an MDisk 652, 668, 691, 755, 778 repartitioning 91 replication 95 restart the cluster 525 restart the node 530 restarting 567, 591 restore points 377 Reverse FlashCopy 37, 377 RFC3720 31 rmrcconsistgrp 463 rmrcrelationship 463 rollback 37 round robin 92, 204, 221 round trip delay 907 round trip delay time 903 round-robin 20 round-robin fashion 25 RPO 6, 36, 595

S
sampling interval 881 SAN Boot Support 221, 223 SAN definitions 104 SAN fabric 75 SAN planning 73 SAN Volume Controller 897

documentation 641 general housekeeping 641 help 641 virtualization 42 SAN zoning 133 SATA 99 scalable 103, 881 scalable cluster architecture 881 SCM 61 scripting 428, 454, 511 scripts 183, 511, 891 SCSI 897 SCSI commands 156 SCSI Disk 895 SCSI primitives 471 SDD 92–93, 162, 165, 169, 223 SDD (Subsystem Device Driver) 169, 201, 223, 240 SDD Dynamic Pathing 221 SDD installation 166 SDD package version 173 SDDDSM 172, 174 secondary 436 secondary clustered system 99 secondary site 68 secure session 529 Secure Shell (SSH) 133 Secure Shell connection 42 separate physical IP networks 60 sequential 20, 92, 488 serialization 397 serialization of I/O by FlashCopy 397 Service Location Protocol 34, 159, 897 set up Metro Mirror 555, 579 SEV 498 shells 511 short-term status information 884 shrink a VDisk 705 shrinking 705 shrinkvdisksize 505 shut down 183 shut down a single node 529 shut down the cluster 524, 795 Simple Network Management Protocol 37, 428, 454, 474 single layer cell 60 single point in time 37 single point of failure 897 single sign-on 43, 56 single-tiered storage pool 20 site 69, 410 SLC 60 SLP 34, 159, 897 SLP daemon 34 SNIA 2 SNMP 37, 428, 454, 474 SNMP alerts 656 SNMP manager 613 SNMP trap 398 Software initiator 157 software upgrade 605 software upgrade packages 856


Solid State Drive 39 Solid State Drives 58 solution guidelines 102 sort 639 sorting 639 source 396 space-efficient 6, 491 Space-efficient background copy 445 space-efficient VDisk 505 space-efficient volume 505 special migration 237 Split 87 split brain 18, 39, 901 split cluster 880 split I/O 6 split I/O Group 87 split per second 95 split-cluster 87 splitting the SAN 897 SPoF 897 spreading the load 91 SSD market 61 SSD solution 61 SSH 42, 512 SSH (Secure Shell) 133 SSH Client 43 SSH client 171, 190 SSH client software 133 SSH key 53 SSH keys 133, 137 SSH server 133 SSH-2 134 SSO 56 stack 234 stand-alone Metro Mirror relationship 561, 585 start (trigger) FlashCopy mapping command 543–544, 760, 783 start a PPRC relationship command 432, 461 startrcrelationship 461 STAT 352 state 423–424, 450–451 connected 421, 448 consistent 422–423, 449–450 ConsistentDisconnected 426, 453 ConsistentStopped 424, 451 ConsistentSynchronized 425, 452 disconnected 421, 448 empty 427, 454 idling 425, 452 IdlingDisconnected 425, 452 inconsistent 422, 449 InconsistentCopying 424, 451 InconsistentDisconnected 426, 453 InconsistentStopped 424, 450 overview 420, 448 synchronized 423, 450 state fragments 422, 449 state overview 421, 455 state transitions 398, 448 states 396, 420, 446

statistics 524 statistics dump 618 Statistics file naming 882 statistics files 881 stop 448 stop FlashCopy consistency group 546, 763 stop FlashCopy mapping command 545 STOP_COMPLETED 398 stoprcconsistgrp 462 stoprcrelationship 461 storage 95 storage cache 41 storage capacity 68 Storage Class Memory 61 storage pool 18 storage tier 21 stripe VDisks 102, 880 striped 20 striped mode 20 striped VDisk 488 subnet mask IP address 115 Subsystem Device Driver (SDD) 169, 201, 223, 240 Subsystem Device Driver Device Specific Module 172 Subsystem Device Driver DSM 174 summary report 352 SUN Solaris support information 221 surviving node 529 suspended mapping 545 SVC basic installation 111 task automation 511 SVC cluster partnership 429, 456 SVC cluster software 857 SVC configuration 68 SVC Console 43 SVC device 898 SVC GUI 43 SVC installations 86 SVC master console 134 SVC node 87 SVC PPRC functions 412 SVC setup 150 SVC superuser 53 svcinfo 468, 626 svcinfo lsfreeextents 232 svcinfo lsmdiskextent 232 svcinfo lsmigrate 232 svcinfo lsVDiskextent 232 svctask 468, 515, 626 svctask mkfcmap 429–432, 456, 459–461, 539–540, 745 switching copy direction 569, 592 switchrcconsistgrp 464 switchrcrelationship 463 symmetrical 2 symmetrical network 896 symmetrical virtualization 2 synchronized 410, 423, 446, 450 synchronizing 445 synchronous reads 234 synchronous writes 234


synthesis 397 system management IP address 33 System Storage Productivity Center 897

T
T0 37 target 156 Target failover 160 target name 31 TCP/IP packets 156 thin-provisioned 27 threshold level 30 tie breaker 39 tie-break situations 39 tie-breaker 39 tier 18, 22 time 522 time zone 522–523 timeout 218 Time-Zero 37 TIP 45 Tivoli Directory Server 55 Tivoli Embedded Security Services 45, 56 Tivoli Integrated Portal 43, 45 Tivoli Storage Productivity Center 43 Tivoli Storage Productivity Center for Data 43 Tivoli Storage Productivity Center for Disk 43 Tivoli Storage Productivity Center for Replication 43 Tivoli Storage Productivity Center Standard Edition 43 token facility 56 trace dump 618 traffic 97 traffic profile activity 68 transitions 235 trigger 543–544

U
unallocated capacity 186 unallocated region 445 unconfigured nodes 527 undetected data corruption 450 uninterruptible power supply 73, 87, 524, 606 unmanaged MDisk 235 unmap a VDisk 503 up2date 200 updates 200 upgrade 856 upgrade precautions 605 upper tier 352 usage statistics 22 use of Metro Mirror 427, 454 used capacity 29 used free capacity 29 using SDD 169, 201, 223

V
VDisk 656 assigning to host 501 creating 487, 491, 684 creating in image mode 491, 726 deleting 500, 697 discovering assigned 166 expanding 500 I/O governing 497 image mode migration concept 235 information 489 mapped to this host 503 migrating 93, 503, 713 modifying 497 path offline for source 398 path offline for target 398 showing 650 showing for MDisk 506 showing using group 506 shrinking 504, 726 working with 487 VDisk discovery 159 VDisk-to-host mapping 503 deleting 700 Veritas Volume Manager 221 View I/O Group details 531 viewing managed disk groups 647 virtual capacity 28 virtual disk 382, 487, 627, 679 Virtual Machine File System 212 virtualization 42 VLUN 895 VMFS 212–214 VMFS datastore 216 Volume I/O governing 30 Volume Mirroring 89 Volume Mirroring Migration 38 volumes 20 target 385 Voting Set 39 voting set 39 vpath configured 168 VSS 191

W
warning capacity 29 warning threshold 505 web interface 224 Windows 2000 host configuration 172, 211 Windows 2000-based hosts 171 Windows host system CLI 190 Windows NT and 2000 specific information 171 working with managed disks 641 workload cycle 98 workloads 880 worldwide port name 164 Write data 42 Write ordering 450 write ordering 416, 442, 449 write performance 27 write through mode 87 write workload 98


write-through 904 write-through mode 42 WWPNs 164, 480, 485, 664, 673

X
XIV 86

Y
YaST Online Update 200

Z
zero buffer 445 zero contingency 29 zero-detection algorithm 29 zone 75 zoning capabilities 76 zoning recommendation 179



Back cover

Implementing the IBM System Storage SAN Volume Controller V6.3


Install, use, and troubleshoot the SAN Volume Controller Become familiar with the exciting new GUI Learn how to use the Easy Tier function
This IBM Redbooks publication is a detailed technical guide to the IBM System Storage SAN Volume Controller (SVC) Version 6.3.0. The SAN Volume Controller is a virtualization appliance solution that maps virtualized volumes, which are visible to hosts and applications, to physical volumes on storage devices. Each server within the storage area network (SAN) has its own set of virtual storage addresses that are mapped to physical addresses. If the physical addresses change, the server continues running using the same virtual addresses that it had before. Therefore, volumes or storage can be added or moved while the server is still running. The IBM virtualization technology improves the management of information at the block level in a network, thus enabling applications and servers to share storage devices on a network. This book is intended for readers who need to implement the SVC at a 6.3.0 release level with a minimum of effort.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE


IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

For more information: ibm.com/redbooks


SG24-7933-01 ISBN
