Jon Tate, Alejandro Berardinelli, Christian Schroeder, Mark Chitti, Massimo Rosati, Torben Jensen
ibm.com/redbooks
International Technical Support Organization

IBM System Storage SAN Volume Controller V6.3

October 2011
SG24-7933-01
Note: Before using this information and the product it supports, read the information in Notices on page xxi.
Second Edition (October 2011)

This edition applies to Version 6.3 of the IBM System Storage SAN Volume Controller. This document was created or updated on January 17, 2012.

Note: This book is based on a pre-GA version of a product and may not apply when the product becomes generally available. We recommend that you consult the product documentation or follow-on versions of this book for more current information.
© Copyright International Business Machines Corporation 2011. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents
Notices . . . xxi
Trademarks . . . xxii

Summary of changes . . . xxiii
October 2011, Second Edition . . . xxiii

Preface . . . xxv
The team who wrote this book . . . xxv
Now you can become a published author, too! . . . xxviii
Comments welcome . . . xxviii
Stay connected to IBM Redbooks . . . xxviii

Chapter 1. Introduction to storage virtualization . . . 1
1.1 Storage virtualization terminology . . . 2
1.2 User requirements driving storage virtualization . . . 5
1.2.1 Benefits of using the SVC . . . 5
1.3 What is new in SVC V6.3.0 . . . 6
1.4 Summary . . . 6
Chapter 2. IBM System Storage SAN Volume Controller . . . 9
2.1 Brief history of the SAN Volume Controller . . . 10
2.2 SVC architectural overview . . . 10
2.2.1 SAN Volume Controller topology . . . 13
2.3 SVC terminology . . . 14
2.4 SAN Volume Controller components . . . 15
2.4.1 Nodes . . . 15
2.4.2 I/O Groups . . . 16
2.4.3 System . . . 16
2.4.4 Split cluster . . . 17
2.4.5 MDisks . . . 17
2.4.6 Quorum disk . . . 18
2.4.7 Disk tier . . . 18
2.4.8 Storage pool . . . 18
2.4.9 Volumes . . . 20
2.4.10 Easy Tier performance function . . . 21
2.4.11 Hosts . . . 22
2.4.12 Maximum supported configurations . . . 22
2.5 Volume overview . . . 23
2.5.1 Image mode volumes . . . 23
2.5.2 Managed mode volumes . . . 24
2.5.3 Cache mode and cache-disabled volumes . . . 25
2.5.4 Mirrored volumes . . . 26
2.5.5 Thin-provisioned volumes . . . 27
2.5.6 Volume I/O governing . . . 30
2.6 iSCSI overview . . . 30
2.6.1 Use of IP addresses and Ethernet ports . . . 32
2.6.2 iSCSI volume discovery . . . 33
2.6.3 iSCSI authentication . . . 34
2.6.4 iSCSI multipathing . . . 34
2.7 Advanced Copy Services overview . . . 35
2.7.1 Synchronous/Asynchronous remote copy . . . 36
2.7.2 FlashCopy . . . 37
2.7.3 Image Mode Migration and Volume Mirroring Migration . . . 38
2.8 SVC clustered system overview . . . 38
2.8.1 Quorum disks . . . 39
2.8.2 Split I/O groups or split cluster . . . 41
2.8.3 Cache . . . 41
2.8.4 Clustered system management . . . 42
2.8.5 IBM System Storage Productivity Center . . . 43
2.9 User authentication . . . 44
2.9.1 Remote authentication via LDAP . . . 45
2.9.2 SVC user names . . . 53
2.9.3 SVC superuser . . . 53
2.9.4 SVC Service Assistant Tool . . . 53
2.9.5 SVC roles and user groups . . . 53
2.9.6 SVC local authentication . . . 54
2.9.7 SVC remote authentication and single sign-on . . . 55
2.10 SVC hardware overview . . . 57
2.10.1 Fibre Channel interfaces . . . 59
2.10.2 LAN interfaces . . . 59
2.11 Solid-state drives . . . 60
2.11.1 Storage bottleneck problem . . . 60
2.11.2 Solid-state drive solution . . . 61
2.11.3 Solid-state drive market . . . 61
2.11.4 Solid-state drives and SVC . . . 62
2.12 Easy Tier . . . 62
2.12.1 Evaluation mode . . . 63
2.12.2 Automatic data placement mode . . . 63
2.13 What is new with SVC 6.3 . . . 63
2.13.1 SVC 6.3 supported hardware list, device driver, and firmware levels . . . 63
2.13.2 SVC 6.3.0 new features . . . 64
2.14 Useful SVC web links . . . 65
Chapter 3. Planning and configuration . . . 67
3.1 General planning rules . . . 68
3.2 Physical planning . . . 69
3.2.1 Preparing your uninterruptible power supply unit environment . . . 70
3.2.2 Physical rules . . . 70
3.2.3 Cable connections . . . 73
3.3 Logical planning . . . 73
3.3.1 Management IP addressing plan . . . 74
3.3.2 SAN zoning and SAN connections . . . 75
3.3.3 iSCSI IP addressing plan . . . 81
3.3.4 Back-end storage subsystem configuration . . . 84
3.3.5 SVC clustered system configuration . . . 86
3.3.6 Split-cluster system configuration . . . 87
3.3.7 Storage Pool configuration . . . 89
3.3.8 Virtual disk configuration . . . 91
3.3.9 Host mapping (LUN masking) . . . 93
3.3.10 Advanced Copy Services . . . 94
3.3.11 SAN boot support . . . 100
3.3.12 Data migration from a non-virtualized storage subsystem . . . 101
3.3.13 SVC configuration backup procedure . . . 101
3.4 Performance considerations . . . 102
3.4.1 SAN . . . 102
3.4.2 Disk subsystems . . . 102
3.4.3 SVC . . . 103
3.4.4 Performance monitoring . . . 104

Chapter 4. SAN Volume Controller initial configuration . . . 105
4.1 Managing the cluster . . . 106
4.1.1 TCP/IP requirements for SAN Volume Controller . . . 106
4.2 System Storage Productivity Center overview . . . 108
4.2.1 IBM System Storage Productivity Center hardware . . . 110
4.2.2 SVC installation planning information for System Storage Productivity Center . . . 110
4.3 Setting up the SVC cluster . . . 111
4.3.1 Introducing the service panels . . . 111
4.3.2 Prerequisites . . . 115
4.3.3 Initiating cluster creation from the front panel . . . 115
4.4 Configuring the GUI . . . 118
4.4.1 Completing the Create Cluster Wizard . . . 118
4.4.2 Changing the default superuser password . . . 128
4.4.3 Configuring the Service IP Addresses . . . 131
4.4.4 Postrequisites . . . 132
4.5 Secure Shell overview . . . 133
4.5.1 Generating public and private SSH key pairs using PuTTY . . . 134
4.5.2 Uploading the SSH public key to the SVC cluster . . . 136
4.5.3 Configuring the PuTTY session for the CLI . . . 137
4.5.4 Starting the PuTTY CLI session . . . 141
4.5.5 Configuring SSH for AIX clients . . . 143
4.6 Using IPv6 . . . 143
4.6.1 Migrating a cluster from IPv4 to IPv6 . . . 144
4.6.2 Migrating a cluster from IPv6 to IPv4 . . . 147

Chapter 5. Host configuration . . . 149
5.1 Host attachment overview for IBM System Storage SAN Volume Controller . . . 150
5.2 SVC setup . . . 150
5.2.1 Fibre Channel and SAN setup overview . . . 151
5.2.2 Port mask . . . 155
5.3 iSCSI . . . 156
5.3.1 Initiators and targets . . . 156
5.3.2 iSCSI Nodes . . . 157
5.3.3 iSCSI Qualified Name (IQN) . . . 157
5.3.4 iSCSI Setup for SVC and host server . . . 158
5.3.5 Volume discovery . . . 159
5.3.6 Authentication . . . 159
5.3.7 Target failover . . . 160
5.3.8 Host failover . . . 161
5.3.9 Additional sources of information . . . 162
5.4 AIX-specific information . . . 162
5.4.1 Configuring the AIX host . . . 162
5.4.2 Operating system versions and maintenance levels . . . 163
5.4.3 HBAs for IBM System p hosts . . . 163
5.4.4 Configuring fast fail and dynamic tracking . . . 163
5.4.5 Installing the 2145 host attachment support package . . . 165
5.4.6 Subsystem Device Driver Path Control Module . . . 165
5.4.7 Configuring assigned volume using SDDPCM . . . 166
5.4.8 Using SDDPCM . . . 169
5.4.9 Creating and preparing volumes for use with AIX V6.1 and SDDPCM . . . 170
5.4.10 Expanding an AIX volume . . . 170
5.4.11 Running SVC commands from an AIX host system . . . 171
5.5 Windows-specific information . . . 171
5.5.1 Configuring Windows Server 2003, 2008, 2008 R2 hosts . . . 172
5.5.2 Configuring Windows . . . 172
5.5.3 Hardware lists, device driver, HBAs, and firmware levels . . . 172
5.5.4 Host adapter installation and configuration . . . 173
5.5.5 Changing the disk timeout on Microsoft Windows Server . . . 173
5.5.6 Installing the SDDDSM multipath driver on Windows . . . 173
5.5.7 Attaching SVC volumes to Windows Server 2008 R2 . . . 176
5.5.8 Extending a Windows Server 2008 (R2) volume . . . 182
5.5.9 Removing a disk on Windows . . . 187
5.6 Using the SVC CLI from a Windows host . . . 190
5.7 Microsoft Volume Shadow Copy . . . 191
5.7.1 Installation overview . . . 191
5.7.2 System requirements for the IBM System Storage hardware provider . . . 192
5.7.3 Installing the IBM System Storage hardware provider . . . 192
5.7.4 Verifying the installation . . . 195
5.7.5 Creating the free and reserved pools of volumes . . . 196
5.7.6 Changing the configuration parameters . . . 197
5.8 Specific Linux (on x86 / x86_64) information . . . 199
5.8.1 Configuring the Linux host . . . 199
5.8.2 Configuration information . . . 200
5.8.3 Disabling automatic Linux system updates . . . 200
5.8.4 Setting queue depth with QLogic HBAs . . . 200
5.8.5 Multipathing in Linux . . . 201
5.8.6 Creating and preparing the SDD volumes for use . . . 205
5.8.7 Using the operating system Device Mapper Multipath (DM-MPIO) . . . 207
5.8.8 Creating and preparing DM-MPIO volumes for use . . . 207
5.9 VMware configuration information . . . 211
5.9.1 Configuring VMware hosts . . . 211
5.9.2 Operating system versions and maintenance levels . . . 212
5.9.3 HBAs for hosts running VMware . . . 212
5.9.4 VMware storage and zoning guidance . . . 212
5.9.5 Setting the HBA timeout for failover in VMware . . . 213
5.9.6 Multipathing in ESX . . . 214
5.9.7 Attaching VMware to volumes . . . 214
5.9.8 Volume naming in VMware . . . 217
5.9.9 Setting the Microsoft guest operating system timeout . . . 218
5.9.10 Extending a VMFS volume . . . 218
5.9.11 Removing a datastore from an ESX host . . . 220
5.10 Sun Solaris support information . . . 221
5.10.1 Operating system versions and maintenance levels . . . 221
5.10.2 SDD dynamic pathing . . . 221
5.11 Hewlett-Packard UNIX configuration information . . . 222
5.11.1 Operating system versions and maintenance levels . . . 222
5.11.2 Multipath solutions supported . . . 222
5.11.3 Coexistence of SDD and PV Links . . . 222
5.11.4 Using an SVC volume as a cluster lock disk . . . 223
5.11.5 Support for HP-UX with greater than eight LUNs . . . 223
5.12 Using SDDDSM, SDDPCM, and SDD web interface . . . 223
5.13 Calculating the queue depth . . . 224
5.14 Further sources of information . . . 225
5.14.1 Publications containing SVC storage subsystem attachment guidelines . . . 225

Chapter 6. Data migration . . . 227
6.1 Migration overview . . . 228
6.2 Migration operations . . . 228
6.2.1 Migrating multiple extents (within a storage pool) . . . 228
6.2.2 Migrating extents off an MDisk that is being deleted . . . 229
6.2.3 Migrating a volume between storage pools . . . 229
6.2.4 Migrating the volume to image mode . . . 230
6.2.5 Migrating a volume between I/O Groups . . . 231
6.2.6 Monitoring the migration progress . . . 232
6.3 Functional overview of migration . . . 232
6.3.1 Parallelism . . . 232
6.3.2 Error handling . . . 233
6.3.3 Migration algorithm . . . 233
6.4 Migrating data from an image mode volume . . . 235
6.4.1 Image mode volume migration concept . . . 235
6.4.2 Migration tips . . . 237
6.5 Data migration for Windows using the SVC GUI . . . 237
6.5.1 Windows Server 2008 host system connected directly to the LSI 3500 . . . 238
6.5.2 Adding the SVC between the host system and the LSI 3500 . . . 241
6.5.3 Importing the migrated disks into an online Windows Server 2008 host . . . 257
6.5.4 Adding the SVC between the host and LSI 3500 using the CLI . . . 260
6.5.5 Migrating a volume from managed mode to image mode . . . 263
6.5.6 Migrating the volume from image mode to image mode . . . 268
6.5.7 Removing image mode data from the SVC . . . 278
6.5.8 Mapping the free disks onto the Windows Server 2008 . . . 281
6.6 Migrating Linux SAN disks to SVC disks . . . 283
6.6.1 Connecting the SVC to your SAN fabric . . . 285
6.6.2 Preparing your SVC to virtualize disks . . . 286
6.6.3 Moving the LUNs to the SVC . . . 290
6.6.4 Migrating the image mode volumes to managed MDisks . . . 293
6.6.5 Preparing to migrate from the SVC . . . 296
6.6.6 Migrating the volumes to image mode volumes . . . 299
6.6.7 Removing the LUNs from the SVC . . . 300
6.7 Migrating ESX SAN disks to SVC disks . . . 303
6.7.1 Connecting the SVC to your SAN fabric . . . 304
6.7.2 Preparing your SVC to virtualize disks . . . 306
6.7.3 Moving the LUNs to the SVC . . . 309
6.7.4 Migrating the image mode volumes . . . 312
6.7.5 Preparing to migrate from the SVC . . . 315
6.7.6 Migrating the managed volumes to image mode volumes . . . 317
6.7.7 Removing the LUNs from the SVC . . . 318
6.8 Migrating AIX SAN disks to SVC volumes . . . 321
6.8.1 Connecting the SVC to your SAN fabric . . . 323
6.8.2 Preparing your SVC to virtualize disks . . . 324
6.8.3 Moving the LUNs to the SVC . . . 329
6.8.4 Migrating image mode volumes to volumes . . . 331
6.8.5 Preparing to migrate from the SVC . . . 333
6.8.6 Migrating the managed volumes . . . 336
6.8.7 Removing the LUNs from the SVC . . . 337
6.9 Using SVC for storage migration . . . 340
6.10 Using volume mirroring and thin-provisioned volumes together . . . 341
6.10.1 Zero detect feature . . . 341
6.10.2 Volume mirroring with thin-provisioned volumes . . . 343

Chapter 7. Easy Tier . . . 349
7.1 Overview of Easy Tier . . . 350
7.2 Easy Tier concepts . . . 350
7.2.1 SSD arrays and MDisks . . . 350
7.2.2 Disk tiers . . . 351
7.2.3 Single tier storage pools . . . 351
7.2.4 Multiple tier storage pools . . . 351
7.2.5 Easy Tier process . . . 352
7.2.6 Easy Tier operating modes . . . 353
7.2.7 Easy Tier activation . . . 354
7.3 Easy Tier implementation considerations . . . 355
7.3.1 Prerequisites . . . 355
7.3.2 Implementation rules . . . 355
7.3.3 Limitations . . . 356
7.4 Measuring and activating Easy Tier . . . 356
7.4.1 Measuring by using the Storage Advisor Tool . . . 357
7.5 SSD implementation and configuration . . . 359
7.5.1 Mirrored configuration . . . 360
7.5.2 Easy Tier . . . 362
7.5.3 Striped . . . 363
7.6 Using Easy Tier with the SVC CLI . . . 365
7.6.1 Initial cluster status . . . 365
7.6.2 Turning on Easy Tier evaluation mode . . . 365
7.6.3 Creating a multitier storage pool . . . 367
7.6.4 Setting the disk tier . . . 368
7.6.5 Checking a volume's Easy Tier mode . . . 368
7.6.6 Final cluster status . . . 369
7.7 Using Easy Tier with the SVC GUI . . . 369
7.7.1 Setting the disk tier on MDisks . . . 370
7.7.2 Checking Easy Tier status . . . 372

Chapter 8. Advanced Copy Services . . . 373
8.1 FlashCopy . . . 374
8.1.1 Business requirements for FlashCopy . . . 374
8.1.2 Backup improvements with FlashCopy . . . 374
8.1.3 Restore with FlashCopy . . . 375
8.1.4 Moving and migrating data with FlashCopy . . . 375
8.1.5 Application testing with FlashCopy . . . 375
8.1.6 Host and application considerations to ensure FlashCopy integrity . . . 376
8.1.7 FlashCopy attributes . . . 376
8.2 Reverse FlashCopy . . . 377
8.2.1 FlashCopy and Tivoli Storage FlashCopy Manager . . . 378
8.3 FlashCopy functional overview . . . 381
8.4 Implementing SVC FlashCopy . . . 381
8.4.1 FlashCopy mappings . . . 382
8.4.2 Multiple Target FlashCopy . . . 382
8.4.3 Consistency Groups . . . 383
8.4.4 FlashCopy indirection layer . . . 385
8.4.5 Grains and the FlashCopy bitmap . . . 386
8.4.6 Interaction and dependency between Multiple Target FlashCopy mappings . . . 387
8.4.7 Summary of the FlashCopy indirection layer algorithm . . . 389
8.4.8 Interaction with the cache . . . 389
8.4.9 FlashCopy and image mode volumes . . . 390
8.4.10 FlashCopy mapping events . . . 391
8.4.11 FlashCopy mapping states . . . 393
8.4.12 Thin-provisioned FlashCopy . . . 395
8.4.13 Background copy . . . 396
8.4.14 Synthesis . . . 397
8.4.15 Serialization of I/O by FlashCopy . . . 397
8.4.16 Event handling . . . 397
8.4.17 Asynchronous notifications . . . 398
8.4.18 Interoperation with Metro Mirror and Global Mirror . . . 399
8.4.19 FlashCopy presets . . . 399
8.5 Volume Mirroring and migration options . . . 400
8.6 Metro Mirror . . . 410
8.6.1 Metro Mirror overview . . . 410
8.6.2 Remote copy techniques . . . 411
8.6.3 Metro Mirror features . . . 412
8.6.4 Multiple Cluster Mirroring . . . 413
8.6.5 Importance of write ordering . . . 416
8.6.6 Remote copy intercluster communication . . . 418
8.6.7 Metro Mirror attributes . . . 419
8.6.8 Methods of synchronization . . . 419
8.6.9 Metro Mirror states and events . . . 420
8.6.10 Practical use of Metro Mirror . . . 427
8.6.11 Valid combinations of FlashCopy, Metro Mirror, and Global Mirror . . . 428
8.6.12 Metro Mirror configuration limits . . . 428
8.7 Metro Mirror commands . . . 428
8.7.1 Listing available SVC cluster partners . . . 429
8.7.2 Creating the SVC cluster partnership . . . 429
8.7.3 Creating a Metro Mirror Consistency Group . . . 430
8.7.4 Creating a Metro Mirror relationship . . . 430
8.7.5 Changing a Metro Mirror relationship . . . 431
8.7.6 Changing a Metro Mirror Consistency Group . . . 431
8.7.7 Starting a Metro Mirror relationship . . . 432
8.7.8 Stopping a Metro Mirror relationship . . . 432
8.7.9 Starting a Metro Mirror Consistency Group . . . 433
8.7.10 Stopping a Metro Mirror Consistency Group . . . 433
8.7.11 Deleting a Metro Mirror relationship . . . 433
8.7.12 Deleting a Metro Mirror Consistency Group . . . 434
8.7.13 Reversing a Metro Mirror relationship . . . 434
8.7.14 Reversing a Metro Mirror Consistency Group . . . 434
8.7.15 Background copy . . . 434
8.8 Global Mirror . . . 435
8.8.1 Intracluster Global Mirror . . . 435
8.8.2 Intercluster Global Mirror . . . 435
8.8.3 Asynchronous remote copy . . . 435
8.8.4 SVC Global Mirror features . . . 436
8.8.5 Global Mirror relationship between master and auxiliary volumes . . . 438
8.8.6 Using Change Volumes with Global Mirror . . . 439
8.8.7 Importance of write ordering . . . 442
8.8.8 Global Mirror Consistency Groups . . . 442
8.8.9 Distribution of work among nodes . . . 444
8.8.10 Background copy performance . . . 444
8.8.11 Thin-provisioned background copy . . . 445
8.9 Global Mirror process . . . 445
8.9.1 Methods of synchronization . . . 445
8.9.2 Global Mirror states and events . . . 446
8.9.3 Practical use of Global Mirror . . . 454
8.9.4 Global Mirror configuration limits . . . 455
8.10 Global Mirror commands . . . 455
8.10.1 Listing the available SVC cluster partners . . . 456
8.10.2 Creating an SVC cluster partnership . . . 459
8.10.3 Creating a Global Mirror Consistency Group . . . 460
8.10.4 Creating a Global Mirror relationship . . . 460
8.10.5 Changing a Global Mirror relationship . . . 460
8.10.6 Changing a Global Mirror Consistency Group . . . 461
8.10.7 Starting a Global Mirror relationship . . . 461
8.10.8 Stopping a Global Mirror relationship . . . 461
8.10.9 Starting a Global Mirror Consistency Group . . . 462
8.10.10 Stopping a Global Mirror Consistency Group . . . 462
8.10.11 Deleting a Global Mirror relationship . . . 462
8.10.12 Deleting a Global Mirror Consistency Group . . . 463
8.10.13 Reversing a Global Mirror relationship . . . 463
8.10.14 Reversing a Global Mirror Consistency Group . . . 463
8.11 Troubleshooting Remote Copy . . . 464
8.11.1 1920 error . . . 464
8.11.2 1720 error . . . 466

Chapter 9. SAN Volume Controller operations using the command-line interface . . . 467
9.1 Normal operations using CLI . . . 468
9.1.1 Command syntax and online help . . . 468
9.2 Working with managed disks and disk controller systems . . . 470
9.2.1 Viewing disk controller details . . . 470
9.2.2 Renaming a controller . . . 471
9.2.3 Discovery status . . . 471
9.2.4 Discovering MDisks . . . 471
9.2.5 Viewing MDisk information . . . 473
9.2.6 Renaming an MDisk . . . 474
9.2.7 Including an MDisk . . . 474
9.2.8 Adding MDisks to a storage pool . . . 476
9.2.9 Showing MDisks in a storage pool . . . 476
9.2.10 Working with a storage pool . . . 476
9.2.11 Creating a storage pool . . . 476
9.2.12 Viewing storage pool information . . . 478
9.2.13 Renaming a storage pool . . . 479
9.2.14 Deleting a storage pool . . . 479
9.2.15 Removing MDisks from a storage pool . . . 480
9.3 Working with hosts . . . 480
9.3.1 Creating a Fibre Channel-attached host . . . 480
9.3.2 Creating an iSCSI-attached host . . . 481
9.3.3 Modifying a host . . . 483
9.3.4 Deleting a host . . . 484
9.3.5 Adding ports to a defined host . . . 484
9.3.6 Deleting ports . . . 485
9.4 Working with the Ethernet port for iSCSI . . . 486
9.5 Working with volumes . . . 487
9.5.1 Creating a volume . . . 487
9.5.2 Volume information . . . 489
9.5.3 Creating a thin-provisioned volume . . . 491
9.5.4 Creating a volume in image mode . . . 491
9.5.5 Adding a mirrored volume copy . . . 492
9.5.6 Splitting a mirrored volume . . . 496
9.5.7 Modifying a volume . . . 497
9.5.8 I/O governing . . . 498
9.5.9 Deleting a volume . . . 500
9.5.10 Expanding a volume . . . 500
9.5.11 Assigning a volume to a host . . . 501
9.5.12 Showing volumes to host mapping . . . 503
9.5.13 Deleting a volume to host mapping . . . 503
9.5.14 Migrating a volume . . . 503
9.5.15 Migrating a fully managed volume to an image mode volume . . . 504
9.5.16 Shrinking a volume . . . 505
9.5.17 Showing a volume on an MDisk . . . 506
9.5.18 Showing which volumes are using a storage pool . . . 506
9.5.19 Showing which MDisks are used by a specific volume . . . 507
9.5.20 Showing from which storage pool a volume has its extents . . . 507
9.5.21 Showing the host to which the volume is mapped . . . 508
9.5.22 Showing the volume to which the host is mapped . . . 508
9.5.23 Tracing a volume from a host back to its physical disk . . . 509
9.6 Scripting under the CLI for SVC task automation . . . 511
9.6.1 Scripting structure . . . 511
9.7 SVC advanced operations using the CLI . . . 515
9.7.1 Command syntax . . . 515
9.7.2 Organizing on window content . . . 515
9.8 Managing the clustered system using the CLI . . . 518
9.8.1 Viewing clustered system properties . . . 518
9.8.2 Changing system settings . . . 520
9.8.3 iSCSI configuration . . . 520
9.8.4 Modifying IP addresses . . . 521
9.8.5 Supported IP address formats . . . 522
9.8.6 Setting the clustered system time zone and time . . . 522
9.8.7 Starting statistics collection . . . 524
9.8.8 Determining the status of a copy operation . . . 524
9.8.9 Shutting down a clustered system . . . 524
9.9 Nodes . . . 526
9.9.1 Viewing node details . . . 526
9.9.2 Adding a node . . . 527
9.9.3 Renaming a node . . . 528
9.9.4 Deleting a node . . . 528
9.9.5 Shutting down a node . . . 529
9.10 I/O Groups . . . 531
9.10.1 Viewing I/O Group details . . . 531
9.10.2 Renaming an I/O Group . . . 531
9.10.3 Adding and removing hostiogrp . . . 531
9.10.4 Listing I/O Groups . . . 532
9.11 Managing authentication . . . 534
9.11.1 Managing users using the CLI . . . 534
9.11.2 Managing user roles and groups . . . 535
9.11.3 Changing a user . . . 536
9.11.4 Audit log command . . . 536
9.12 Managing Copy Services . . . 538
9.12.1 FlashCopy operations . . . 538
9.12.2 Setting up FlashCopy . . . 539
9.12.3 Creating a FlashCopy Consistency Group . . . 539
9.12.4 Creating a FlashCopy mapping . . . 540
9.12.5 Preparing (pre-triggering) the FlashCopy mapping . . . 541
9.12.6 Preparing (pre-triggering) the FlashCopy Consistency Group . . . 542
9.12.7 Starting (triggering) FlashCopy mappings . . . 543
9.12.8 Starting (triggering) FlashCopy Consistency Group . . . 544
9.12.9 Monitoring the FlashCopy progress . . . 545
9.12.10 Stopping the FlashCopy mapping . . . 545
9.12.11 Stopping the FlashCopy Consistency Group . . . 546
9.12.12 Deleting the FlashCopy mapping . . . 547
9.12.13 Deleting the FlashCopy Consistency Group . . . 548
9.12.14 Migrating a volume to a thin-provisioned volume . . . 548
9.12.15 Reverse FlashCopy . . . 552
9.12.16 Split-stopping of FlashCopy maps . . . 554
9.13 Metro Mirror operation . . . 555
9.13.1 Setting up Metro Mirror . . . 556
9.13.2 Creating an SVC partnership between ITSO_SVC1 and ITSO_SVC4 . . . 557
9.13.3 Creating a Metro Mirror Consistency Group . . . 560
9.13.4 Creating the Metro Mirror relationships . . . 560
9.13.5 Creating a stand-alone Metro Mirror relationship for MM_App_Pri . . . 561
9.13.6 Starting Metro Mirror . . . 563
9.13.7 Starting a Metro Mirror Consistency Group . . . 563
9.13.8 Monitoring the background copy progress . . . 564
9.13.9 Stopping and restarting Metro Mirror . . . 566
9.13.10 Stopping a stand-alone Metro Mirror relationship . . . 566
9.13.11 Stopping a Metro Mirror Consistency Group . . . 566
9.13.12 Restarting a Metro Mirror relationship in the Idling state . . . 567
9.13.13 Restarting a Metro Mirror Consistency Group in the Idling state . . . 568
9.13.14 Changing copy direction for Metro Mirror . . . 569
9.13.15 Switching copy direction for a Metro Mirror relationship . . . 569
9.13.16 Switching copy direction for a Metro Mirror Consistency Group . . . 570
9.13.17 Creating an SVC partnership among many clustered systems . . . 571
9.13.18 Star configuration partnership . . . 572
9.14 Global Mirror operation . . . 579
9.14.1 Setting up Global Mirror . . . 580
9.14.2 Creating an SVC partnership between ITSO_SVC1 and ITSO_SVC4 . . . 580
9.14.3 Changing link tolerance and system delay simulation . . . 582
9.14.4 Creating a Global Mirror Consistency Group . . . 584
9.14.5 Creating Global Mirror relationships . . . 584
9.14.6 Creating the stand-alone Global Mirror relationship for GM_App_Pri . . . 585
9.14.7 Starting Global Mirror . . . 585
9.14.8 Starting a stand-alone Global Mirror relationship . . . 586
9.14.9 Starting a Global Mirror Consistency Group . . . 586
9.14.10 Monitoring background copy progress . . . 587
9.14.11 Stopping and restarting Global Mirror . . . 589
9.14.12 Stopping a stand-alone Global Mirror relationship . . . 589
9.14.13 Stopping a Global Mirror Consistency Group . . . 590
9.14.14 Restarting a Global Mirror relationship in the Idling state . . . 591
9.14.15 Restarting a Global Mirror Consistency Group in the Idling state . . . 591
9.14.16 Changing direction for Global Mirror . . . 592
9.14.17 Switching copy direction for a Global Mirror relationship . . . 592
9.14.18 Switching copy direction for a Global Mirror Consistency Group . . . 593
9.14.19 Changing a GM relationship to cycling mode . . . 595
9.14.20 Create thin provisioned change volumes . . . 596
9.14.21 Stop standalone remote copy relationship . . . 597
9.14.22 Set cycling mode on standalone remote copy relationship . . . 597
9.14.23 Set change volume on master volume . . . 598
9.14.24 Set change volume on auxiliary volume . . . 598
9.14.25 Start standalone relationship in cycling mode . . . 599
9.14.26 Stop Consistency Group to change the cycling mode . . . 600
9.14.27 Set cycling mode on Consistency Group . . . 601
9.14.28 Set change volume on master volume relationships of the Consistency Group . . . 601
9.14.29 Set change volume on auxiliary volumes . . . 603
9.14.30 Start Consistency Group CG_W2K3_GM in cycling mode . . . 604
9.15 Service and maintenance . . . 605
9.15.1 Upgrading software . . . 605
9.15.2 Running maintenance procedures . . . 610
9.15.3 Setting up SNMP notification . . . 613
9.15.4 Set syslog event notification . . . 613
9.15.5 Configuring error notification using an email server . . . 614
9.15.6 Analyzing the event log . . . 614
9.15.7 License settings . . . 616
9.15.8 Listing dumps . . . 617
9.16 Backing up the SVC system configuration . . . 621
9.16.1 Prerequisites . . . 621
9.17 Restoring the SVC clustered system configuration . . . 622
9.17.1 Deleting configuration backup . . . 623
9.18 Working with the SVC Quorum MDisk . . . 624
9.18.1 Listing the SVC Quorum MDisk . . . 624
9.18.2 Changing the SVC Quorum Disk . . . 624
9.19 Working with the Service Assistant menu . . . 626
9.19.1 SVC CLI Service Assistant menu . . . 626
9.20 SAN troubleshooting and data collection . . . 627
9.21 T3 recovery process . . . 629
Chapter 10. SAN Volume Controller operations using the GUI . . . 631
10.1 SVC normal operations using the GUI . . . 632
10.1.1 Introduction to SVC normal operations using the GUI . . . 632
10.1.2 Organizing on window content . . . 636
10.1.3 Help . . . 641
10.2 Working with External Disk Controllers . . . 641
10.2.1 Viewing Disk Controller details . . . 641
10.2.2 Renaming a disk controller . . . 642
10.2.3 Discovering MDisks from the External panel . . . 643
10.3 Working with Storage Pools . . . 643
10.3.1 Viewing Storage Pool information . . . 644
10.3.2 Discovering MDisks . . . 645
10.3.3 Creating Storage Pools . . . 645
10.3.4 Renaming a Storage Pool . . . 648
10.3.5 Deleting a Storage Pool . . . 649
10.3.6 Adding or removing MDisks from a Storage Pool . . . 650
10.3.7 Showing the volumes that are associated with a Storage Pool . . . 650
10.4 Working with managed disks . . . 650
10.4.1 MDisk information . . . 650
10.4.2 Renaming an MDisk . . . 652
10.4.3 Discovering MDisks . . . 653
10.4.4 Adding MDisks to a Storage Pool . . . 654
10.4.5 Removing MDisks from a Storage Pool . . . 655
10.4.6 Including an excluded MDisk . . . 656
10.4.7 Activating EasyTier . . . 657
10.5 Migration . . . 659
10.6 Working with hosts . . . 659
10.6.1 Host information . . . 661
10.6.2 Creating a host . . . 663
10.6.3 Renaming a host . . . 668
10.6.4 Modifying a host . . . 669
10.6.5 Deleting a host . . . 670
10.6.6 Adding ports . . . 671
10.6.7 Deleting ports . . . 674
10.6.8 Creating or modifying the host mapping . . . 676
10.6.9 Deleting a host mapping . . . 678
10.6.10 Deleting all host mappings for a given host . . . 678
10.7 Working with volumes . . . 679
10.7.1 Volume information . . . 681
10.7.2 Creating a volume . . . 684
10.7.3 Renaming a volume . . . 691
10.7.4 Modifying a volume . . . 692
10.7.5 Modifying thin-provisioning volume properties . . . 694
10.7.6 Deleting a volume . . . 697
10.7.7 Creating or modifying the host mapping . . . 698
10.7.8 Deleting a host mapping . . . 700
10.7.9 Deleting all host mappings for a given volume . . . 704
10.7.10 Shrinking a volume . . . 705
10.7.11 Expanding a volume . . . 707
10.7.12 Shrinking the real capacity of a thin-provisioned volume . . . 709
10.7.13 Expanding the real capacity of a thin provisioned volume . . . 712
10.7.14 Migrating a volume . . . 713
10.7.15 Adding a mirrored copy to an existing volume . . . 716
10.7.16 Deleting a mirrored copy from a volume mirror . . . 719
10.7.17 Splitting a volume copy . . . 720
10.7.18 Validating volume copies . . . 722
10.7.19 Migrating to a thin-provisioned volume using volume mirroring . . . 723
10.7.20 Creating a volume in image mode . . . 726
10.7.21 Migrating a volume to an image mode volume . . . 726
10.7.22 Creating an image mode mirrored volume . . . 726
10.8 Copy Services: managing FlashCopy . . . 726
10.8.1 Creating a FlashCopy Mapping . . . 728
10.8.2 Creating and starting a snapshot preset with a single click . . . 739
10.8.3 Creating and starting a clone preset with a single click . . . 741
10.8.4 Creating and starting a backup preset with a single click . . . 743
10.8.5 Creating a FlashCopy Consistency Group . . . 745
10.8.6 Creating FlashCopy mappings in a Consistency Group . . . 747
10.8.7 Show Dependent Mappings . . . 751
10.8.8 Moving a FlashCopy mapping to a Consistency Group . . . 752
10.8.9 Removing a FlashCopy mapping from a Consistency Group . . . 753
10.8.10 Modifying a FlashCopy mapping . . . 754
10.8.11 Renaming a FlashCopy mapping . . . 755
10.8.12 Renaming a Consistency Group . . . 756
10.8.13 Deleting a FlashCopy mapping . . . 757
10.8.14 Deleting a FlashCopy Consistency Group . . . 758
10.8.15 Starting FlashCopy mappings . . . 759
10.8.16 Starting a FlashCopy Consistency Group . . . 760
10.8.17 Stopping the FlashCopy Consistency Group . . . 761
10.8.18 Stopping the FlashCopy mapping . . . 763
10.8.19 Migrating between a fully allocated volume and a Space-Efficient volume . . . 763
10.8.20 Reversing and splitting a FlashCopy mapping . . . 764
10.9 Copy Services: managing Remote Copy . . . 764
10.9.1 Cluster partnership . . . 766
10.9.2 Creating the SVC partnership between two remote SVC Clusters . . . 768
10.9.3 Creating stand-alone remote copy relationships . . . 770
10.9.4 Creating a Consistency Group . . . 773
10.9.5 Renaming a Consistency Group . . . 778
10.9.6 Renaming a Remote Copy relationship . . . 778
10.9.7 Moving a stand-alone Remote Copy relationship to a Consistency Group . . . 779
10.9.8 Removing Remote Copy relationship from a Consistency Group . . . 780
10.9.9 Starting a Remote Copy relationship . . . 781
10.9.10 Starting a Remote Copy Consistency Group . . . 783
10.9.11 Switching the copy direction for a Remote Copy relationship . . . 784
10.9.12 Switching the copy direction for a Consistency Group . . . 786
10.9.13 Stopping a Remote Copy relationship . . . 787
10.9.14 Stopping a Consistency Group . . . 788
10.9.15 Deleting stand-alone Remote Copy relationships . . . 790
10.9.16 Deleting a Consistency Group . . . 790
10.10 Managing the cluster using the GUI . . . 791
10.10.1 System Status information . . . 791
10.10.2 View I/O groups and their associated nodes . . . 793
10.10.3 View cluster properties . . . 793
10.10.4 Renaming an SVC cluster . . . 794
10.10.5 Shutting down a cluster . . . 795
10.10.6 Upgrading software . . . 797
10.11 Managing I/O Groups . . . 798
10.11.1 View I/O group properties . . . 798
10.11.2 Modifying I/O group properties . . . 799
10.12 Managing nodes . . . 801
10.12.1 View node properties . . . 801
10.12.2 Renaming a node . . . 802
10.12.3 Adding a node to the cluster . . . 804
10.12.4 Removing a node from the cluster . . . 805
10.13 Troubleshooting . . . 807
10.13.1 Monitoring panel . . . 807
10.13.2 Event Log panel . . . 810
10.13.3 Run fix procedure . . . 817
10.13.4 Support panel . . . 819
10.14 User Management . . . 824
10.14.1 Creating a user . . . 826
10.14.2 Modifying user properties . . . 827
10.14.3 Removing a user password . . . 829
10.14.4 Removing a user SSH Public Key . . . 830
10.14.5 Deleting a user . . . 831
10.14.6 Creating a user group . . . 832
10.14.7 Modifying user group properties . . . 834
10.14.8 Deleting a user group . . . 836
10.14.9 Audit log information . . . 837
10.15 Configuration . . . 841
10.15.1 Configuring the Network . . . 841
10.15.2 Configuring the Service IP addresses . . . 843
10.15.3 iSCSI configuration . . . 844
10.15.4 Fibre Channel information . . . 846
10.15.5 Event notifications . . . 847
10.15.6 Email notifications . . . 847
10.15.7 SNMP notifications . . . 849
10.15.8 Using the General panel . . . 852
10.15.9 Date and Time . . . 852
10.15.10 Licensing . . . 853
10.15.11 Upgrading software . . . 854
10.15.12 Setting GUI Preferences . . . 854
10.16 Upgrading SVC software . . . 855
10.16.1 Precautions before upgrade . . . 856
10.16.2 SVC software upgrade test utility . . . 856
10.16.3 Upgrade procedure . . . 857
10.17 Service Assistant with the GUI . . . 863
10.17.1 Placing an SVC node into Service State . . . 866
10.17.2 Exiting an SVC node from Service State . . . 868
10.17.3 Rebooting an SVC node . . . 870
10.17.4 Collect Logs page . . . 871
10.17.5 Manage Cluster page . . . 872
10.17.6 Recover Cluster . . . 873
10.17.7 Reinstall software . . . 873
10.17.8 Upgrade Manually . . . 874
10.17.9 Modify WWNN . . . 875
10.17.10 Change Service IP . . . 875
10.17.11 Configure CLI access . . . 876
10.17.12 Restart Service . . . 876
Appendix A. Performance data and statistics gathering . . . 879
SVC performance overview . . . 880
Performance considerations . . . 880
SVC performance perspectives . . . 881
Performance monitoring . . . 881
Collecting performance statistics . . . 881
Real-Time Performance Monitoring . . . 884
Performance data collection and Tivoli Storage Productivity Center for Disk . . . 889
Appendix C. SAN Volume Controller Split I/O Group Overview, Diagnostics, and Recovery Guidelines . . . 899
Introduction . . . 900
Split I/O Group overview . . . 901
No ISL Configuration . . . 901
ISL Configuration . . . 906
Diagnosis and recovery planning . . . 909
Diagnosis guidelines . . . 916
Diagnosis Guidelines for NO ISL configuration . . . 916
Diagnosis Guidelines for ISL configuration . . . 934
Recovery guidelines . . . 935
What do you need to supply to recover the Split I/O Group configuration . . . 936
Recovery Guidelines for No ISL configuration . . . 937
Recovery Guidelines for ISL configuration . . . 948
Related publications . . . 951
IBM Redbooks . . . 951
Other publications . . . 951
Online resources . . . 952
Help from IBM . . . 953
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 955
Notices
This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms.
You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml

The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:
AIX 5L, AIX, DB2, developerWorks, DS4000, DS8000, FlashCopy, GPFS, IBM Systems Director Active Energy Manager, IBM, Power Systems, Redbooks, Redbooks (logo), System p, System Storage DS, System Storage, System x, Tivoli, TotalStorage, WebSphere, XIV
The following terms are trademarks of other companies:

Intel, Intel Xeon, Intel Inside, Intel Centrino, Celeron, Intel SpeedStep, Itanium, and Pentium, and the Intel, Intel Inside, and Intel Centrino logos, are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.
Summary of changes
This section describes the technical changes made in this edition of the book and in previous editions. This edition might also include minor corrections and editorial changes that are not identified.

Summary of Changes for SG24-7933-01, IBM System Storage SAN Volume Controller V6.3, as created or updated on January 17, 2012.
New information
Split cluster I/O Groups
Changed information
Screen captures all at 6.3 level
Preface
This IBM Redbooks publication is a detailed technical guide to the IBM System Storage SAN Volume Controller (SVC) Version 6.3.0.

The SAN Volume Controller is a virtualization appliance solution that maps virtualized volumes that are visible to hosts and applications to physical volumes on storage devices. Each server within the storage area network (SAN) has its own set of virtual storage addresses that are mapped to physical addresses. If the physical addresses change, the server continues running using the same virtual addresses that it had before. Therefore, volumes or storage can be added or moved while the server is still running.

The IBM virtualization technology improves the management of information at the block level in a network, thus enabling applications and servers to share storage devices on a network. This book is intended for readers who need to implement the SVC at a 6.3.0 release level with a minimum of effort.
till 2005 he was the client representative for IBM's Internal Client platforms in Denmark. Torben started to work in the SAN/DISK for open systems department in March 2005, and he provides daily and ongoing support as well as working on SAN designs and solutions for customers.

Massimo Rosati is a Certified ITS Senior Storage and SAN Software Specialist at IBM Italy. He has 26 years of experience in the delivery of Professional Services and software support. His areas of expertise include storage hardware, storage area networks, storage virtualization, and disaster recovery and business continuity solutions. He has written other IBM Redbooks publications on storage virtualization products.

Christian Schroeder is a Storage and SAN support specialist at the Technical Support and Competence Center (TSCC) in IBM Germany, and he has been with IBM since 1999. Before he joined the TSCC for IBM Systems Storage, he worked as a support specialist for IBM System x servers and provided EMEA Level 2 support for IBM BladeCenter solutions.

Figure 1 shows the authors (Mark Chitti not pictured).
This book was produced by a team of specialists from around the world working at Brocade Communications Systems, San Jose, and the International Technical Support Organization, San Jose Center. We extend our thanks to the following people for their contributions to this project, including the development and PFE teams in Hursley.

In particular, we thank the previous authors of versions of this book: Matt Amanat, Pall Beck, Angelo Bernasconi, Alexandre Chabrol, Steve Cody, Sean Crawford, Peter Crowhurst, Sameer Dhulekar, Werner Eggli, Frank Enders, Katja Gebuhr, Deon George, Amarnath Hiriyannappa, Thorsten Hoss, Juerg Hossli, Philippe Jachimczyk, Kamalakkannan J Jayaraman, Dan Koeck, Bent Lerager, Ian MacQuarrie, Craig McKenna, Andy McManus, Joao Marcos Leite, Barry Mellish, Suad Musovich, Massimo Rosati, Fred Scholten, Robert Symons, Marcus Thordal, and Xiao Peng Zhao.

Thanks also to the following people for their contributions to previous editions, and to those who contributed to this edition:

Chris Canto, Peter Eccles, Huw Francis, Carlos Fuente, Alex Howell, Colin Jewell, Neil Kirkland, Geoff Lane, Andrew Martin, Paul Merrison, Evelyn Perez, Steve Randle, Lucy Harris (nee Raw), Greg Shepherd, Bill Scales, Matt Smith, Barry Whyte, and Muhammad Zubair
IBM Hursley

Marc Bruni
IBM Houston

Larry Chiu and Paul Muench
IBM Almaden

Bill Wiegand
IBM Advanced Technical Support

Sharon Wang
IBM Chicago

Chris Saul
IBM San Jose
Tina Sampson
IBM Tucson

Sangam Racherla
IBM ITSO

Special thanks to the Brocade staff for their unparalleled support of this residency in terms of equipment and support in many areas: Jim Baldyga, Mansi Botadra, Yong Choi, Silviano Gaona, Brian Steffler, Marcus Thordal, and Steven Tong of Brocade Communications Systems.
Comments welcome
Your comments are important to us! We want our books to be as helpful as possible. Send us your comments about this book or other IBM Redbooks publications in one of the following ways:
- Use the online Contact us review Redbooks form found at: ibm.com/redbooks
- Send your comments in an email to: redbooks@us.ibm.com
- Mail your comments to: IBM Corporation, International Technical Support Organization, Dept. HYTD, Mail Station P099, 2455 South Road, Poughkeepsie, NY 12601-5400
Chapter 1. Introduction to storage virtualization
The key concept of virtualization is to decouple the storage from the storage functions required in today's storage area network (SAN) environment.
Decoupling means abstracting the physical location of data from the logical representation of the data. The virtualization engine presents logical entities to the user and internally manages the process of mapping these entities to the actual location of the physical storage.
The actual mapping that is performed depends on the specific implementation, as does the granularity of the mapping, which can range from a small fraction of a physical disk up to the full capacity of a physical disk. A single block of information in this environment is identified by its logical unit number (LUN), which identifies the physical disk, and an offset within that LUN, which is known as a logical block address (LBA). Note that the term physical disk is used in this context to describe a piece of storage that might be carved out of a RAID array in the underlying disk subsystem.

Specific to the SVC implementation, the logical entity whose address space is mapped in this way is referred to as a volume, and the physical disks are referred to as managed disks (MDisks). Figure 1-2 on page 4 shows an overview of block-level virtualization.
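To make the LBA-to-LUN mapping concrete, the following minimal Python sketch illustrates how a block-level virtualization layer might resolve a volume LBA to an MDisk and an offset on that MDisk. This is not SVC source code; the extent size, block size, and extent table contents are invented for illustration only.

    # Illustrative sketch of block-level virtualization (not SVC code).
    EXTENT_SIZE_MB = 256
    BLOCK_SIZE = 512                       # bytes per logical block
    BLOCKS_PER_EXTENT = EXTENT_SIZE_MB * 1024 * 1024 // BLOCK_SIZE

    # Hypothetical extent table for one volume:
    # index = volume extent number, value = (mdisk_id, extent number on that MDisk)
    extent_table = [(0, 17), (1, 4), (2, 9)]

    def resolve(volume_lba):
        """Translate a volume LBA into (mdisk_id, mdisk_lba)."""
        extent_index, offset = divmod(volume_lba, BLOCKS_PER_EXTENT)
        mdisk_id, mdisk_extent = extent_table[extent_index]
        return (mdisk_id, mdisk_extent * BLOCKS_PER_EXTENT + offset)

    print(resolve(600000))   # second volume extent -> MDisk 1

The host only ever issues the volume LBA; the virtualization layer performs the table lookup and can change the table contents (for example, during migration) without the host noticing.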
The server and application are only aware of the logical entities, and they access these entities through a consistent interface that is provided by the virtualization layer.

The functionality of a volume that is presented to a server, such as expanding or reducing the size of a volume, mirroring a volume, creating a FlashCopy, and thin provisioning, is implemented in the virtualization layer. It does not rely in any way on the functionality that is provided by the underlying disk subsystem. Data that is stored in a virtualized environment is stored in a location-independent way, which allows a user to move or migrate data between physical locations, referred to as storage pools.

These capabilities are the cornerstones of block-level storage virtualization: the core benefits that a product such as the SVC can provide over traditional directly attached or SAN storage. The SVC provides the following benefits:
- Online volume migration while applications are running, which is possibly the greatest single benefit of storage virtualization. This capability allows data to be migrated on and between the underlying storage subsystems without any impact to the servers and applications; in fact, the migration is performed without the servers and applications even being aware that it occurred.
- Simplified storage management, by providing a single image for multiple controllers and a consistent user interface for provisioning heterogeneous storage.
- Enterprise-level copy services functions. Performing the copy services functions within the SVC removes dependencies on the storage subsystems, thereby enabling the source and target copies to be on different storage subsystem types.
- Increased storage utilization, by pooling storage across the SAN.
- Improved system performance, as a result of volume striping across multiple arrays or controllers and the additional cache that the SVC provides.

The SVC delivers these functions in a homogeneous way on a scalable and highly available platform, over any attached storage, and to any attached server.
1.4 Summary
Storage virtualization is no longer merely a concept or an unproven technology. All major storage vendors offer storage virtualization products. Making use of storage virtualization as the foundation for a flexible and reliable storage solution helps enterprises to better align business and IT by optimizing the storage infrastructure and storage management to meet business demands.

The IBM System Storage SAN Volume Controller is a mature, sixth-generation virtualization solution that uses open standards and is consistent with the Storage Networking Industry Association (SNIA) storage model. The SVC is an appliance-based, in-band block virtualization process in which intelligence, including advanced storage functions, is migrated from individual storage devices to the storage network.

The IBM System Storage SAN Volume Controller can improve the utilization of your storage resources, simplify your storage management, and improve the availability of your applications.
Chapter 2. IBM System Storage SAN Volume Controller
There are two major approaches in use today for implementing block-level aggregation and virtualization:
- Symmetric: in-band appliance. The device is a SAN appliance that sits in the data path, and all I/O flows through the device. This kind of implementation is also referred to as symmetric virtualization or in-band. The device is both target and initiator: it is the target of I/O requests from the host perspective, and the initiator of I/O requests from the storage perspective. The redirection is performed by issuing new I/O requests to the storage. The SVC uses symmetric virtualization.
- Asymmetric: out-of-band or controller-based. The device is usually a storage controller that provides an internal switch for external storage attachment. In this approach, the storage controller intercepts and redirects I/O requests to the external storage as it does for internal storage; the actual I/O requests are themselves redirected. This kind of implementation is also referred to as asymmetric virtualization or out-of-band.

Figure 2-1 shows variations of the two virtualization approaches.
Although these approaches provide essentially the same cornerstones of virtualization, there can be interesting side effects, as discussed here.
The controller-based approach has high functionality, but it fails in terms of scalability or upgradability. Because of the nature of its design, there is no true decoupling with this approach, which becomes an issue for the life cycle of this solution, such as a controller. You will be challenged with data migration issues and questions, such as how to reconnect the servers to the new controller, and how to reconnect them online without any impact to your applications.

Be aware that with this approach, you not only replace a controller but also implicitly replace your entire virtualization solution. In addition to replacing the hardware, it can also be necessary to update or repurchase the licenses for the virtualization feature, advanced copy functions, and so on.

With a SAN or fabric-based appliance solution that is based on a scale-out cluster architecture, life cycle management tasks, such as adding or replacing disk subsystems or migrating data between them, are extremely simple. Servers and applications remain online, data migration takes place transparently on the virtualization platform, and licenses for virtualization and copy services require no update, that is, no additional costs, when disk subsystems are replaced.

Only the fabric-based appliance solution provides an independent and scalable virtualization platform that can provide enterprise-class copy services; is open for future interfaces and protocols; allows you to choose the disk subsystems that best fit your requirements; and does not lock you into specific SAN hardware. For these reasons, IBM has chosen the SAN or fabric-based appliance approach for the implementation of the IBM System Storage SAN Volume Controller (SVC).

The SVC possesses the following key characteristics:
- It is highly scalable, providing an easy growth path to 2n nodes (it grows in pairs of nodes).
- It is SAN interface-independent. It supports FC and iSCSI today, and it is open for future enhancements.
- It is host-independent for fixed block-based Open Systems environments.
- It is external storage RAID controller-independent, with a continual and ongoing process to qualify additional types of controllers.
- It can use disks located internally within the nodes (solid-state disks).
- It can use disks locally attached to the nodes (SAS drives).

On the SAN storage that is provided by the disk subsystems, the SVC can offer the following services:
- It can create and manage a single pool of storage attached to the SAN.
- It can manage multiple tiers of storage.
- It provides block-level virtualization (logical unit virtualization).
- It provides automatic block-level (sub-LUN) data migration between storage tiers.
- It provides advanced functions to the entire SAN, such as a large scalable cache and Advanced Copy Services: FlashCopy (point-in-time copy), and Metro Mirror and Global Mirror (synchronous and asynchronous remote copy).
This list of features will grow with each future release, because the layered architecture of the SVC can easily implement new storage features.
A clustered system of SVC nodes is connected to the same fabric and presents logical disks or volumes to the hosts. These volumes are created from managed LUNs or MDisks that are presented by the RAID disk subsystems. There are two distinct zones in the fabric:
- A host zone, in which the hosts can see and address the SVC nodes.
- A storage zone, in which the SVC nodes can see and address the MDisks/logical unit numbers (LUNs) that are presented by the RAID subsystems.

Hosts are not permitted to operate on the RAID LUNs directly; all data transfer happens through the SVC nodes. This design is commonly described as symmetric virtualization.

For iSCSI-based access, using two networks and separating iSCSI traffic within the networks by using a dedicated virtual local area network (VLAN) path for storage traffic prevents any IP interface, switch, or target port failure from compromising the host servers' access to the volumes' LUNs.
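To illustrate the two zones described above, on a Brocade-based fabric the zoning might be defined with Fabric OS commands similar to the following sketch. The zone names, configuration name, and WWPNs here are purely hypothetical:

    zonecreate "host_zone_W2K3", "21:00:00:e0:8b:05:4c:aa; 50:05:07:68:01:40:37:e5"
    zonecreate "storage_zone_DS8K", "50:05:07:68:01:40:37:e5; 50:05:07:63:0e:01:ab:cd"
    cfgcreate "ITSO_SAN_cfg", "host_zone_W2K3; storage_zone_DS8K"
    cfgenable "ITSO_SAN_cfg"

The first zone pairs a host HBA WWPN with an SVC node port; the second pairs the SVC node port with a back-end controller port. The host never shares a zone with the back-end storage.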
Term: event
Description: An occurrence of significance to a task or system. Events can include completion or failure of an operation, a user action, or the change in state of a process.

Term: host mapping (also VDisk-to-host mapping)
Description: The process of controlling which hosts have access to specific volumes within a system.

Term: storage pool
Description: A collection of storage capacity that provides the capacity requirements for a volume.

Term: space-efficient (thin provisioning)
Description: The ability to define a storage unit (full system, storage pool, volume) with a logical capacity size that is larger than the physical capacity assigned to that storage unit.

Term: volume
Description: A discrete unit of storage on disk, tape, or other data recording medium that supports a form of identifier and parameter list, such as a volume label or input/output control.
For a detailed glossary containing the terms and definitions that are used with the SAN Volume Controller, see Appendix B, Terminology on page 891.
2.4.1 Nodes
Each SAN Volume Controller hardware unit is called a node. The node provides the virtualization for a set of volumes, cache, and copy services functions. SVC nodes are deployed in pairs, and one or more pairs make up a clustered system, also referred to simply as a system. A system can consist of between one and four SVC node pairs.
One of the nodes within the system will be known as the configuration node. The configuration node manages the configuration activity for the system. If this node fails, the system will choose a new node to become the configuration node. Because the nodes are installed in pairs, each node provides a failover function to its partner node in the event of a node failure.
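One way to see which node currently holds the configuration role is from the CLI; a brief sketch follows (the system name ITSO_SVC1 is hypothetical, and the exact output columns can vary by code level):

    IBM_2145:ITSO_SVC1:admin> svcinfo lsnode

The concise view of lsnode includes a config node column; the node that reports yes in that column is the current configuration node.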
2.4.3 System
The system or clustered system consists of between one and four I/O Groups. Certain configuration limitations are then set for the individual system. For example, the maximum number of volumes supported per system is 8192 (having a maximum of 2048 volumes per I/O Group), or the maximum managed disk supported is 32 PB per system. All configuration, monitoring, and service tasks are performed at the system level. Configuration settings are replicated to all nodes in the system. To facilitate these tasks, a management IP address is set for the system. A process is provided to back up the system configuration data onto disk so that it can be restored in the event of a disaster. Note that this method does not back up application data. Only SVC system configuration information is backed up. For the purposes of remote data mirroring, two or more systems must form a partnership prior to creating relationships between mirrored volumes.
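The configuration backup mentioned above can be triggered from the CLI, as in this sketch (the system name is hypothetical):

    IBM_2145:ITSO_SVC1:admin> svcconfig backup

The command writes a configuration backup file (svc.config.backup.xml) on the configuration node, which you can then copy off the system and archive. Remember that this backs up only the SVC configuration, not application data.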
For details about the Maximum Configurations applicable to the System, I/O Group and nodes, select the restrictions hot link in the section corresponding to your SVC code level: http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html
2.4.5 MDisks
The SVC system and its I/O Groups view the storage that is presented to the SAN by the back-end controllers as a number of disks or LUNs, known as managed disks or MDisks. Because the SVC does not attempt to provide recovery from physical disk failures within the back-end controllers, an MDisk is usually provisioned from a RAID array. The application servers, however, do not see the MDisks at all. Instead, they see a number of logical disks, known as virtual disks or volumes, which are presented by the SVC I/O Groups through the SAN (FC) or LAN (iSCSI) to the servers.

The MDisks are placed into storage pools, where they are divided into a number of extents, which can range in size from 16 MB to 8192 MB, as defined by the SVC administrator. A volume is host-accessible storage that has been provisioned out of one storage pool or, if it is a mirrored volume, out of two storage pools. The maximum size of an MDisk is 1 PB, and an SVC system supports up to 4096 MDisks (including internal RAID arrays).

At any point in time, an MDisk is in one of the following three modes:

Unmanaged MDisk
An MDisk is reported as unmanaged when it is not a member of any storage pool. An unmanaged MDisk is not associated with any volumes and has no metadata stored on it. The SVC does not write to an MDisk that is in unmanaged mode, except when it attempts to change the mode of the MDisk to one of the other modes. The SVC can see the resource, but it is not assigned to a storage pool.

Managed MDisk
Managed mode MDisks are always members of a storage pool, and they contribute extents to the storage pool. Volumes (if not operated in image mode) are created from these extents. MDisks operating in managed mode might have metadata extents allocated from them and can be used as quorum disks. This is the most common and normal mode for an MDisk.
Image mode MDisk
Image mode provides a direct block-for-block translation from the MDisk to the volume by using virtualization. This mode is provided to satisfy three major usage scenarios:
- Image mode allows virtualization of MDisks that already contain data that was written directly, not through an SVC; rather, the data was created by a direct-connected host. This mode allows a client to insert the SVC into the data path of an existing storage volume or LUN with minimal downtime. Chapter 6, Data migration on page 227, provides details of the data migration process.
- Image mode allows a volume that is managed by the SVC to be used with the native copy services function provided by the underlying RAID controller. To avoid the loss of data integrity when the SVC is used in this way, it is important that you disable the SVC cache for the volume.
- SVC provides the ability to migrate to image mode, which allows the SVC to export volumes and access them directly from a host without the SVC in the path.

Each MDisk presented from an external disk controller has an online path count, which is the number of nodes having access to that MDisk. The maximum count is the maximum number of paths detected at any point in time by the system, and the current count is what the system sees at this point in time. A current value less than the maximum can indicate that SAN fabric paths have been lost. See 2.5.1, Image mode volumes on page 23 for more details.

Starting with SVC 6.1, internal SSD drives do not appear as MDisks. Internal SSDs are used and appear as disk drives, and therefore additional RAID protection is required.
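You can check the mode of each MDisk from the CLI; a brief sketch (the system name is hypothetical):

    IBM_2145:ITSO_SVC1:admin> svcinfo lsmdisk

The concise view reports each MDisk's mode (unmanaged, managed, or image), along with its controller and capacity, which makes it easy to spot newly presented, still unmanaged LUNs.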
Figure 2-3 on page 19 illustrates the relationships of the SVC entities to each other.
Each MDisk in the storage pool is divided into a number of extents. The size of the extent is selected by the administrator when the storage pool is created and cannot be changed later. The size of an extent ranges from 16 MB up to 8192 MB.

It is a best practice to use the same extent size for all storage pools in a system; this is a prerequisite for supporting volume migration between two storage pools. If the storage pool extent sizes are not the same, you must use volume mirroring (see 2.5.4, Mirrored volumes on page 26) to copy volumes between pools.

The SVC limits the number of extents in a system to 2^22 (about 4 million). Because the number of addressable extents is limited, the total capacity of an SVC system depends on the extent size that is chosen by the SVC administrator. The capacity numbers that are specified in Table 2-2 for an SVC system assume that all defined storage pools have been created with the same extent size.
Table 2-2 Extent size-to-addressability matrix

Extent size    Maximum system capacity
16 MB          64 TB
32 MB          128 TB
64 MB          256 TB
128 MB         512 TB
256 MB         1 PB
512 MB         2 PB
1024 MB        4 PB
2048 MB        8 PB
4096 MB        16 PB
8192 MB        32 PB
For most systems, a capacity of 1 to 2 PB is sufficient. A best practice is to use 256 MB or, for larger clustered systems, 512 MB as the standard extent size.
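The capacities in Table 2-2 follow directly from the extent count limit. For example, with a 256 MB extent size:

    maximum system capacity = maximum number of extents x extent size
                            = 2^22 x 256 MB
                            = 4,194,304 x 256 MB = 1 PB

Doubling the extent size doubles the maximum addressable capacity, at the cost of coarser allocation granularity.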
2.4.9 Volumes
Volumes are logical disks presented to the host or application servers by the SVC. The hosts cannot see the MDisks; they can only see the logical volumes created from combining extents from a storage pool.
There are three types of volumes: striped, sequential, and image. The type is determined by the way in which the extents are allocated from the storage pool:
- A volume created in striped mode has extents allocated from each MDisk in the storage pool in a round-robin fashion.
- With a sequential mode volume, extents are allocated sequentially from one MDisk.
- An image mode volume is a one-to-one mapped extent mode volume.

Striped mode is the best method to use in most cases. However, sequential extent allocation mode can slightly increase the sequential performance for certain workloads. Figure 2-4 on page 21 shows striped volume mode and sequential volume mode, and illustrates how the extent allocation from the storage pool differs.
You can allocate the extents for a volume in many ways. The process is under full user control at volume creation time and can be changed at any time by migrating single extents of a volume to another MDisk within the storage pool. Chapter 6, Data migration on page 227, Chapter 9, SAN Volume Controller operations using the command-line interface on page 467, and Chapter 10, SAN Volume Controller operations using the GUI on page 631 provide detailed explanations about how to create volumes and migrate extents by using the GUI or CLI.
Easy Tier creates a report every 24 hours providing information about how Easy Tier would behave if the pool were a multitiered storage pool. So even though Easy Tier extent migration is not possible within a single-tier pool, the Easy Tier statistical measurement function is available. The Easy Tier function can make it more appropriate to use smaller storage pool extent sizes. The usage statistics file can be offloaded from the SVC nodes; you can then use the IBM Storage Advisor Tool to create a summary report. For more detailed information about Easy Tier functionality and about statistics generation using IBM's Storage Advisor Tool, see Chapter 7, Easy Tier on page 349.
2.4.11 Hosts
Volumes can be mapped to a host to allow a specific server access to a set of volumes. A host within the SVC is a collection of HBA worldwide port names (WWPNs) or iSCSI qualified names (IQNs) that are defined on the specific server. Note that iSCSI names are internally identified by fake WWPNs, that is, WWPNs that are generated by the SVC. Volumes can be mapped to multiple hosts, for example, a volume that is accessed by multiple hosts of a server cluster.
iSCSI is an alternative means of attaching hosts. However, all communication with back-end storage subsystems, and with other SVC systems, is still through FC. Node failover can be handled without having a multipath driver installed on the iSCSI server: an iSCSI-attached server can simply reconnect after a node failover to the original target IP address, which is then presented by the partner node. To protect the server against link failures in the network or host bus adapter (HBA) failures, using a multipath driver is mandatory.
Volumes are LUN masked to the host's HBA WWPNs by a process called host mapping. Mapping a volume to the host makes it accessible to the WWPNs or iSCSI names (IQNs) that are configured on the host object. For a SCSI over Ethernet connection, the IQN identifies the iSCSI target (destination) adapter. Host objects can have both IQNs and WWPNs.
this specific storage pool must be the same as the extent size of the storage pool to which you plan to migrate the data. All of the SVC copy services functions can be applied to image mode disks.
The allocation of a specific number of extents from a specific set of MDisks is performed by the following algorithm: if the set of MDisks from which to allocate extents contains more than one MDisk, extents are allocated from the MDisks in a round-robin fashion. If an MDisk has no free extents when its turn arrives, its turn is missed and the round-robin moves to the next MDisk in the set that has a free extent.
When creating a new volume, the first MDisk from which to allocate an extent is chosen in a pseudo-random way rather than simply choosing the next disk in a round-robin fashion. The pseudo-random choice avoids the situation whereby the striping effect inherent in a round-robin algorithm places the first extent of a large number of volumes on the same MDisk. Placing the first extent of many volumes on the same MDisk can lead to poor performance for workloads that place a large I/O load on the first extent of each volume, or that create multiple sequential streams.
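The following minimal Python sketch illustrates the allocation behavior just described; it is not the SVC implementation, and modeling each MDisk as a list of free extent IDs is an assumption made for the example:

import random

def allocate_extents(mdisks, num_extents):
    # mdisks: one list of free extent IDs per MDisk in the storage pool
    allocated = []
    index = random.randrange(len(mdisks))      # pseudo-random starting MDisk
    while len(allocated) < num_extents:
        if mdisks[index]:                      # this MDisk still has a free extent
            allocated.append((index, mdisks[index].pop(0)))
        elif not any(mdisks):                  # no MDisk has free extents left
            raise RuntimeError("storage pool is out of free extents")
        index = (index + 1) % len(mdisks)      # round-robin to the next MDisk
    return allocated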
A second copy can be added to a volume with a single copy, or removed from a volume with two copies. Checks prevent the accidental removal of the only remaining copy of a volume. A newly created, unformatted volume with two copies will initially have the two copies in an out-of-synchronization state. The primary copy will be defined as fresh and the secondary copy as stale. The synchronization process will update the secondary copy until it is fully synchronized. This is done at the default synchronization rate or at a rate defined when creating the volume or modifying it. The synchronization status for mirrored volumes is recorded on the quorum disk.
If a two-copy mirrored volume is created with the format parameter, both copies are formatted in parallel, and the volume comes online when both operations are complete and the copies are in sync. If mirrored volumes are expanded or shrunk, all of their copies are also expanded or shrunk.
If it is known that the MDisk space that will be used for creating copies is already formatted, or if the user does not require read stability, a no synchronization option can be selected that declares the copies as synchronized (even when they are not).
To minimize the time required to resynchronize a copy that has become out of sync, only the 256 KB grains that have been written to since synchronization was lost are copied. This approach is known as an incremental synchronization; only the changed grains need to be copied to restore synchronization.
Important: An unmirrored volume can be migrated from one location to another by simply adding a second copy at the desired destination, waiting for the two copies to synchronize, and then removing the original copy 0. This operation can be stopped at any time. The two copies can be in separate storage pools with separate extent sizes.
Where there are two copies of a volume, one copy is known as the primary copy. If the primary is available and synchronized, reads from the volume are directed to it. The user can select the primary when creating the volume, or can change it later. Placing the primary copy on a high-performance controller maximizes the read performance of the volume. The write performance is constrained if one copy is on a lower-performance controller, because writes must complete to both copies before the volume can acknowledge to the host that the write completed successfully. Writes must complete to both copies to be considered successfully written, even if volume mirroring has one copy in a solid-state drive storage pool and the second copy in a storage pool backed by a disk subsystem.
A volume with copies can be checked to see whether all of the copies are identical or consistent. If a medium error is encountered while reading from one copy, it is repaired by using data from the other copy. This consistency check is performed asynchronously with host I/O.
Important: Mirrored volumes can be taken offline if there is no quorum disk available. This behavior occurs because the synchronization status for mirrored volumes is recorded on the quorum disk.
Mirrored volumes consume bitmap space at a rate of 1 bit per 256 KB grain, which translates to 1 MB of bitmap space supporting 2 TB of mirrored volumes. The default allocation of bitmap space is 20 MB, which supports 40 TB of mirrored volumes. If all 512 MB of variable bitmap space is allocated to mirrored volumes, 1 PB of mirrored volumes can be supported.
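The bitmap arithmetic is easy to verify. A minimal Python sketch, assuming only the 1-bit-per-256 KB-grain rule stated above:

GRAIN_KB = 256  # one bitmap bit tracks one 256 KB grain

def mirrored_capacity_tb(bitmap_mb):
    bits = bitmap_mb * 1024 * 1024 * 8   # bitmap size in bits
    kb = bits * GRAIN_KB                 # mirrored capacity in KB
    return kb / (1024 ** 3)              # KB -> TB

print(mirrored_capacity_tb(1))     # 2.0 TB per 1 MB of bitmap
print(mirrored_capacity_tb(20))    # 40.0 TB with the 20 MB default
print(mirrored_capacity_tb(512))   # 1024.0 TB (1 PB) at the 512 MB maximum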
virtual capacity available to the host. In a fully allocated volume, these two values will be the same. Thus, the real capacity will determine the quantity of MDisk extents that will be initially allocated to the volume. The virtual capacity will be the capacity of the volume reported to all other SVC components (for example, FlashCopy, Cache, and Remote Copy) and to the host servers. The real capacity is used to store both the user data and the metadata for the thin-provisioned volume. The real capacity can be specified as an absolute value or a percentage of the virtual capacity. Thin-provisioned volumes can be used as volumes assigned to the host; by FlashCopy to implement thin-provisioned FlashCopy targets; and also with the mirrored volumes feature. When a thin-provisioned volume is initially created, a small amount of the real capacity will be used for initial metadata. Write I/Os to grains of the thin volume that have not previously been written to will cause grains of the real capacity to be used to store metadata and the actual user data. Write I/Os to grains that have previously been written to will update the grain where data was previously written. The grain size is defined when the volume is created and can be 32 KB, 64 KB, 128 KB, or 256 KB. Figure 2-8 illustrates the thin-provisioning concept.
Thin-provisioned volumes store both user data and metadata. Each grain of data requires metadata to be stored, which means that the I/O rates obtained from thin-provisioned volumes are lower than those of fully allocated volumes. The metadata storage overhead is never greater than 0.1% of the user data, and the overhead is independent of the virtual capacity of the volume. If you are using thin-provisioned volumes in a FlashCopy map, then for best performance use the same grain
size as the map grain size. If you are using the thin-provisioned volume directly with a host system, use a small grain size.
Thin-provisioned volume format: Thin-provisioned volumes do not need formatting. A read I/O that requests data from unallocated data space returns zeroes. When a write I/O causes space to be allocated, the grain is zeroed prior to use. However, on CF8 nodes, space is not allocated for a host write that contains all zeros. The formatting flag is ignored when a thin volume is created or when the real capacity is expanded; the virtualization component never formats the real capacity of a thin-provisioned volume.
The real capacity of a thin volume can be changed if the volume is not in image mode. Increasing the real capacity allows a larger amount of data and metadata to be stored on the volume. Thin-provisioned volumes use the real capacity in ascending order as new data is written to the volume. If the user initially assigns too much real capacity to the volume, the real capacity can be reduced to free storage for other uses.
A thin-provisioned volume can be configured to autoexpand. This feature causes the SVC to automatically add a fixed amount of additional real capacity to the thin volume as required. Autoexpand therefore attempts to maintain a fixed amount of unused real capacity for the volume, known as the contingency capacity. The contingency capacity is initially set to the real capacity that is assigned when the volume is created. If the user modifies the real capacity, the contingency capacity is reset to the difference between the used capacity and the real capacity.
A volume that is created without the autoexpand feature, and thus has a zero contingency capacity, goes offline as soon as the real capacity is used and needs to expand. Autoexpand does not cause the real capacity to grow much beyond the virtual capacity. The real capacity can be manually expanded to more than the maximum that is required by the current virtual capacity, and the contingency capacity is recalculated.
To support the autoexpansion of thin-provisioned volumes, the storage pools from which they are allocated have a configurable capacity warning. When the used capacity of the pool exceeds the warning capacity, a warning event is logged. For example, if a warning of 80% has been specified, the event is logged when only 20% of the pool capacity remains free.
A thin-provisioned volume can be converted nondisruptively to a fully allocated volume, or vice versa, by using the volume mirroring function. For example, you can add a thin-provisioned copy to a fully allocated primary volume and then remove the fully allocated copy from the volume after they are synchronized. The fully allocated to thin-provisioned migration procedure uses a zero-detection algorithm so that grains containing all zeros do not cause any real capacity to be used.
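The following minimal Python sketch models the autoexpand and contingency capacity behavior described above; the class and units are hypothetical, and it is an illustration rather than the product algorithm:

class ThinVolume:
    def __init__(self, virtual_gb, real_gb, autoexpand=True):
        self.virtual_gb = virtual_gb
        self.real_gb = real_gb
        self.used_gb = 0.0
        self.autoexpand = autoexpand
        self.contingency_gb = real_gb   # initially the real capacity at creation

    def write(self, gb):
        self.used_gb += gb
        if self.used_gb > self.real_gb and not self.autoexpand:
            raise RuntimeError("volume offline: real capacity exhausted")
        if self.autoexpand:
            # keep roughly contingency_gb of unused real capacity,
            # without growing much past the virtual capacity
            target = min(self.used_gb + self.contingency_gb, self.virtual_gb)
            self.real_gb = max(self.real_gb, target)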
rate). Only Read, Write, and Verify commands that access the physical medium are subject to I/O governing. The governing rate can be set in I/Os per second or MB per second. It can be altered by changing the throttle value through the svctask chvdisk command and specifying the -rate parameter.
I/O governing: I/O governing on Metro Mirror or Global Mirror secondary volumes does not affect the data copy rate from the primary. Governing has no effect on FlashCopy or data migration I/O rates.
An I/O budget is expressed as a number of I/Os, or MBs, over a minute. The budget is evenly divided between all SVC nodes that service that volume, that is, between the nodes that form the I/O Group of which that volume is a member.
The algorithm operates two levels of policing. While a volume on each SVC node receives I/O at a rate lower than the governed level, no governing is performed. When the I/O rate exceeds the defined threshold, adjustments to the policy are made. A check is made every minute to confirm that each node is continuing to receive I/O below the threshold level. Whenever this check shows that the host has exceeded its limit on one or more nodes, policing begins for new I/Os. The following conditions exist while policing is in force:
- A budget allowance is calculated for a one-second period, and I/Os are counted over a period of a second.
- If I/Os are received in excess of the one-second budget on any node in the I/O Group, those I/Os and later I/Os are pended.
- When the second expires, a new budget is established, and any pended I/Os are redriven under the new budget.
This algorithm might cause I/O to back up in the front end, which might eventually cause a Queue Full Condition to be reported to hosts that continue to flood the system with I/O. If a host stays within its one-second budget on all nodes in the I/O Group for a period of one minute, the policing is relaxed and monitoring takes place over the one-minute period as before.
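A minimal Python sketch of the per-second budget policing described above (the class and method names are hypothetical; this is not the SVC implementation):

def one_second_budget(ios_per_minute, nodes_in_io_group):
    # the per-minute budget is split evenly across the I/O Group's nodes
    return ios_per_minute / nodes_in_io_group / 60

class NodeThrottle:
    def __init__(self, budget_per_second):
        self.budget = budget_per_second
        self.count = 0
        self.pending = []

    def submit(self, io):
        if self.count < self.budget:
            self.count += 1
            return "dispatched"
        self.pending.append(io)        # I/Os beyond the budget are pended
        return "pended"

    def next_second(self):
        # a new second brings a new budget; pended I/Os are redriven under it
        self.count = 0
        redriven, self.pending = self.pending, []
        for io in redriven:
            self.submit(io)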
The major functions of iSCSI include encapsulation and the reliable delivery of CDB transactions between initiators and targets through the TCP/IP network, especially over a potentially unreliable IP network.
The concepts of names and addresses are carefully separated in iSCSI:
- An iSCSI name is a location-independent, permanent identifier for an iSCSI node. An iSCSI node has one iSCSI name, which stays constant for the life of the node. The terms initiator name and target name also refer to an iSCSI name.
- An iSCSI address specifies not only the iSCSI name of an iSCSI node, but also a location of that node. The address consists of a host name or IP address, a TCP port number (for the target), and the iSCSI name of the node. An iSCSI node can have any number of addresses, which can change at any time, particularly if they are assigned by way of Dynamic Host Configuration Protocol (DHCP). An SVC node represents an iSCSI node and provides statically allocated IP addresses.
Each iSCSI node, that is, an initiator or target, has a unique iSCSI Qualified Name (IQN), which can have a size of up to 255 bytes. The IQN is formed according to the rules adopted for Internet nodes. The iSCSI qualified name format is defined in RFC 3720 and contains (in order) these elements:
- The string iqn.
- A date code specifying the year and month in which the organization registered the domain or subdomain name used as the naming authority string.
- The organizational naming authority string, which consists of a valid, reversed domain or subdomain name.
- Optionally, a colon (:), followed by a string of the assigning organization's choosing, which must make each assigned iSCSI name unique.
For SVC, the IQN for its iSCSI target is specified as:
iqn.1986-03.com.ibm:2145.<clustername>.<nodename>
On a Windows server, the IQN, that is, the name for the iSCSI initiator, can be defined as:
iqn.1991-05.com.microsoft:<computer name>
The IQNs can be abbreviated by using a descriptive name, known as an alias. An alias can be assigned to an initiator or a target. The alias is independent of the name and does not have to be unique. Because it is not unique, the alias must be used in a purely informational way. It cannot be used to specify a target at login or used during authentication. Both targets and initiators can have aliases.
An iSCSI name provides the correct identification of an iSCSI device irrespective of its physical location. Remember, the IQN is an identifier, not an address.
Be careful: Before changing system or node names for an SVC system that has servers connected to it by way of iSCSI, be aware that because the system and node names are part of the SVC's IQN, you can lose access to your data by changing these names. The SVC GUI displays a specific warning, but the CLI does not.
The iSCSI session, which consists of a login phase and a full feature phase, is completed with a special command.
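As a small illustration, the SVC target IQN format above can be produced as follows; the cluster and node names are hypothetical examples:

def svc_target_iqn(cluster_name, node_name):
    # format stated earlier: iqn.1986-03.com.ibm:2145.<clustername>.<nodename>
    return f"iqn.1986-03.com.ibm:2145.{cluster_name}.{node_name}"

print(svc_target_iqn("itsosvc1", "node1"))
# iqn.1986-03.com.ibm:2145.itsosvc1.node1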
The login phase of iSCSI is identical to the FC port login process (PLOGI). It is used to adjust various parameters between two network entities and to confirm the access rights of an initiator. If the iSCSI login phase is completed successfully, the target confirms the login for the initiator; otherwise, the login is not confirmed and the TCP connection breaks.
As soon as the login is confirmed, the iSCSI session enters the full feature phase. If more than one TCP connection was established, iSCSI requires that each command/response pair go through one TCP connection. Thus, each separate read or write command is carried out without the need to trace each request across separate flows. However, separate transactions can be delivered through separate TCP connections within one session.
Figure 2-9 illustrates an overview of the various block-level storage protocols and shows where the iSCSI layer is positioned.
Figure 2-10 shows an overview of the IP addresses on an SVC node port and illustrates how these IP addresses are moved between the nodes of an I/O Group. The management IP addresses and the iSCSI target IP addresses will fail over to the partner node N2 if node N1 fails (and vice versa). The iSCSI target IPs will fail back to their corresponding ports on node N1 when node N1 is running again.
It is a best practice to keep all of the eth0 ports on all of the nodes in the system on the same subnet. The same applies to the eth1 ports; however, it can be a separate subnet from the eth0 ports. An SVC system supports a maximum of 256 iSCSI sessions per SAN Volume Controller iSCSI target. You can find detailed examples of the SVC port configuration in Chapter 9, SAN Volume Controller operations using the command-line interface on page 467 and in Chapter 10, SAN Volume Controller operations using the GUI on page 631.
Service Location Protocol (SLP)
The SVC node runs an SLP daemon, which responds to host requests. This daemon reports the available services on the node, such as the CIMOM service that runs on the configuration node; the iSCSI I/O service can now also be reported.

iSCSI Send Target request
The host can also send a Send Target request using the iSCSI protocol to the iSCSI TCP/IP port (port 3260).
A host multipathing driver for iSCSI is required if you want these capabilities:
- Protecting a server from network link failures
- Protecting a server from network failures, if the server is connected through two separate networks
- Providing load balancing on the server's network links
Metro Mirror and Global Mirror are the IBM branded terms for synchronous remote copy and asynchronous remote copy, respectively.
Synchronous remote copy ensures that updates are physically committed (not merely held in volume cache) in both the primary and the secondary SVC clustered systems before the application considers the updates complete; therefore, the secondary is fully up-to-date if it is needed in a failover. However, the application is fully exposed to the latency and bandwidth limitations of the communication link to the secondary. In a truly remote situation, this extra latency can have a significant adverse effect on application performance; hence, Metro Mirror is limited to distances of up to 300 kilometers (~186 miles). Distance induces latency of approximately 5 microseconds per kilometer, which does not include latency added by the equipment in the path. The nature of synchronous remote copy is that the latency for the distance and the equipment in the path is added directly to your application I/O response times.
Special configuration guidelines exist for SAN fabrics that are used for data replication. It is necessary to consider the distance and available bandwidth of the intersite links. The SVC Support Portal contains details regarding these guidelines:
http://www-947.ibm.com/support/entry/portal/Overview/Hardware/System_Storage/Storage_software/Storage_virtualization/SAN_Volume_Controller_%282145%29
Refer to 8.6, Metro Mirror on page 410 for more details about SVC's synchronous mirroring.
In asynchronous remote copy, the application is given acknowledgement that the write is complete before the write is actually committed (written to backing storage) at the secondary. Thus, on a failover, certain updates (data) might be missing at the secondary. The application must have an external mechanism for recovering the missing updates or recovering to a consistent point in time (which is usually a few minutes in the past). This mechanism can involve user intervention, but in most practical scenarios it must be at least partially automated.
Recovery on the secondary site involves assigning the Global Mirror targets from the SVC target system to one or more hosts (depending on your disaster recovery design), making those volumes visible on the host, and creating any required multipath device definitions. The application must then be started, and a recovery procedure performed, either to a consistent point in time or by recovering the missing updates. This is why the initial state of Global Mirror targets is called crash consistent. The term may sound daunting, but it just means that the data on the volumes appears to be in the same state as if an application crash had occurred. Because most applications, such as databases, have long had mechanisms for dealing with this type of data state, recovery is a fairly mundane operation (depending upon the application). After this application recovery procedure is finished, the application starts normally.
Note: When planning your Recovery Point Objective (RPO), you must account for application recovery procedures, the length of time they take, and the point to which they may roll back data. This means that while Global Mirror on an SVC typically provides sub-second RPO times, your effective RPO time may be up to 5 minutes or longer, depending on the application behavior.
Most clients will aim to automate failover or recovery of the remote copy through failover management software. SVC provides Simple Network Management Protocol (SNMP) traps and interfaces to enable this automation. IBM Support for automation is provided by IBM Tivoli Storage Productivity Center for Replication.
The Tivoli documentation can also be accessed online at the IBM Tivoli Storage Productivity Center information center: http://publib.boulder.ibm.com/infocenter/tivihelp/v4r1/index.jsp
2.7.2 FlashCopy
FlashCopy is the IBM branded name for point-in-time (sometimes called Time-Zero, or T0) copy. This function makes a copy of the blocks on a source volume and can duplicate them on 1 to 256 target volumes.
Note: When using the multiple target capability of FlashCopy, if an additional copy (C) is started while there is an existing copy in progress (B), then (C) has a dependency on (B). This means that if you terminate (B), then (C) becomes invalid.
FlashCopy works by creating one or two (for incremental operations) bitmaps to track changes to the data on the source volume. This bitmap is also used to present an image of the source data at the point in time the copy was taken to target hosts while the actual data is being copied. This capability ensures that copies appear to be instantaneous.
Note: In this context, bitmap refers to a programming data structure that is used to compactly store Boolean values. Do not confuse this with the popular image file format.
If your FlashCopy targets have existing content, it is overwritten during the copy operation. This is also true of the no copy (copy rate 0) option, where only changed data is copied. After the copy operation has started, the target volume appears to have the contents of the source volume as it existed at the point in time the copy was initiated. Although the physical copy of the data takes an amount of time that varies with system activity and configuration, the resulting data at the target appears as though the copy was made instantaneously.
FlashCopy permits management operations to be coordinated, via a grouping of FlashCopy pairs, so that a common single point in time is chosen for copying target volumes from their respective source volumes. This capability allows a consistent copy of data for applications that span multiple volumes.
SVC also permits source and target volumes for FlashCopy to be thin-provisioned volumes. FlashCopies to or from thin-provisioned volumes allow duplication of data while consuming less space. The space consumed by these volumes depends on the change rate of the data, so they should typically be used in time-limited scenarios, because over time they can fill the physical space they were allocated.
Reverse FlashCopy enables target volumes to become restore points for the source volume without breaking the FlashCopy relationship and without having to wait for the original copy operation to complete. SVC supports multiple targets and thus multiple rollback points.
In most practical scenarios, the FlashCopy functionality of the SVC should be integrated into a process or procedure that allows the benefits of the point-in-time copies to be leveraged to address business needs. IBM offers Tivoli Storage FlashCopy Manager for this purpose. You can read more about Tivoli Storage FlashCopy Manager at:
http://www-01.ibm.com/software/tivoli/products/storage-flashcopy-mgr/
Most clients aim to integrate the FlashCopy feature for point-in-time copies and quick recovery of their applications and databases.
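A minimal Python sketch of the bitmap-based copy-on-write mechanism described above (illustrative only, with volumes modeled as simple lists of grains):

class FlashCopyMap:
    def __init__(self, source, target):
        self.source, self.target = source, target
        self.copied = [False] * len(source)   # the T0 bitmap: one flag per grain

    def write_source(self, grain, data):
        if not self.copied[grain]:            # copy-on-write preserves the T0 image
            self.target[grain] = self.source[grain]
            self.copied[grain] = True
        self.source[grain] = data

    def read_target(self, grain):
        # grains not yet copied still hold their T0 contents on the source
        return self.target[grain] if self.copied[grain] else self.source[grain]

The bitmap is what makes the copy appear instantaneous: reads of the target are redirected to the source until each grain has actually been copied.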
You can read a detailed description of FlashCopy copy services in Chapter 8, Advanced Copy Services on page 373.
Although the SVC code is based on a purpose-optimized Linux kernel, the clustered system feature is not based on Linux clustering code. The clustered system software used within SVC, that is, the event manager cluster framework, is based on the outcome of the COMPASS research project. It is the key element that isolates the SVC application from the underlying hardware nodes. The clustered system software makes the code portable and provides the means to keep the single instances of the SVC code running on separate system nodes in sync. Node restarts (during a code upgrade), adding new nodes, removing old nodes from a system, or node failures therefore cannot impact the SVC's availability.
It is key for all active nodes of a system to know that they are members of the system. Especially in situations such as the split-brain scenario, where single nodes lose contact with other nodes, it is key to have a solid mechanism to decide which nodes form the active system. A worst case scenario is a system that splits into two separate systems. Within an SVC system, the voting set and a quorum disk are responsible for the integrity of the system. If nodes are added to a system, they are added to the voting set; if nodes are removed, they are also quickly removed from the voting set. Over time, the voting set, and thus the nodes in the system, can completely change, so that the system can migrate onto a completely separate set of nodes from the set on which it started.
The SVC clustered system implements a dynamic quorum. Following a loss of nodes, if the system can continue operation, it adjusts the quorum requirement so that further node failures can be tolerated.
The lowest Node Unique ID in a system becomes the boss node for the group of nodes, and it determines (from the quorum rules) whether the nodes can operate as the system. This node also presents the maximum of two system IP addresses on one or both of its Ethernet ports to allow access for system management.
Note: To be considered eligible as a quorum disk, a LUN must meet the following criteria:
- It must be presented by a disk subsystem that is supported to provide SVC quorum disks.
- It must have been manually allowed to be a quorum disk candidate by using the svctask chcontroller -allow_quorum yes command.
- It must be in managed mode (no image mode disks).
- It must have sufficient free extents to hold the system state information, plus the stored configuration metadata.
- It must be visible to all of the nodes in the system.
If possible, the SVC places the quorum candidates on separate disk subsystems. After the quorum disk has been selected, however, no attempt is made to ensure that the other quorum candidates are presented through separate disk subsystems.
Important: Verifying quorum disk placement, and adjusting it to separate storage systems where possible, reduces the dependency on a single storage system and can increase quorum disk availability significantly.
Quorum disk candidates and the active quorum disk in a system can be listed with the svcinfo lsquorum command. When the set of quorum disk candidates has been chosen, it is fixed. However, a new quorum disk candidate can be chosen in one of these conditions:
- When the administrator requests that a specific MDisk become a quorum disk by using the svctask setquorum command
- When an MDisk that is a quorum disk is deleted from a storage pool
- When an MDisk that is a quorum disk changes to image mode
An offline MDisk is not replaced as a quorum disk candidate.
For disaster recovery purposes, a system must be regarded as a single entity, so the system and the quorum disk need to be colocated. There are special considerations concerning the placement of the active quorum disk for stretched or split cluster and split I/O Group configurations. Details are available at this website:
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003311
Important: Running an SVC system without a quorum disk can seriously affect your operation. A lack of available quorum disks for storing metadata prevents any migration operation (including a forced MDisk delete). Mirrored volumes can be taken offline if there is no quorum disk available, because the synchronization status for mirrored volumes is recorded on the quorum disk.
During normal operation of the system, the nodes communicate with each other. If a node is idle for a few seconds, a heartbeat signal is sent to ensure connectivity with the system. If a node fails for any reason, the workload that is intended for it is taken over by another node until the failed node has been restarted and readmitted into the system (which happens automatically). If the microcode on a node becomes corrupted, resulting in a failure, the
workload is transferred to another node. The code on the failed node is repaired, and the node is readmitted into the system (again, all automatically).
2.8.3 Cache
The primary benefit of storage cache is to improve I/O response time. Reads and writes to a magnetic disk drive suffer from both seek and latency time at the drive level, which can result in 1 ms to 10 ms of response time (for an enterprise-class disk). The 2145-CF8 nodes provide 24 GB of memory per node, or 48 GB per I/O Group, or 192 GB per SVC system.
The SVC provides a flexible cache model, and the node's memory can be used as read or write cache. The size of the write cache is limited to a maximum of 12 GB of the node's memory. Depending on the current I/O conditions on a node, the entire 24 GB of memory can be fully used as read cache.
Cache is allocated in 4 KB segments. A segment holds part of one track. A track is the unit of locking and destage granularity in the cache. The cache virtual track size is 32 KB (eight segments). A track might be only partially populated with valid pages. The SVC coalesces writes up to a 256 KB track size if the writes reside in the same tracks prior to destage; for example, 4 KB is written into a track, and then another 4 KB is written to another location in the same track. Therefore, the blocks written from the SVC to the disk subsystem can be any size between 512 bytes and 256 KB.
When data is written by the host, the preferred node within the I/O Group saves the data in its cache. Before the cache returns completion to the host, the write must be mirrored to the partner node, that is, copied into the cache of the partner node, for availability reasons. After having a copy of the written data, the cache returns completion to the host. A volume that has not received a write update during the last two minutes automatically has all modified data destaged to disk.
If one node of an I/O Group is missing, due to a restart or a hardware failure, the remaining node empties all of its write cache and proceeds in an operation mode referred to as write-through mode. A node operating in write-through mode writes data directly to the disk subsystem before sending an I/O completion status message back to the host. Running in this mode can degrade the performance of the specific I/O Group.
Write cache is partitioned by storage pool. This feature restricts the maximum amount of write cache that a single storage pool can allocate in a system. Table 2-3 shows the upper limit of write cache data that a single storage pool in a system can occupy.
Table 2-3 Upper limit of write cache per storage pool

Storage pools in system   Upper limit of write cache per pool
One                       100%
Two                       66%
Three                     40%
Four                      33%
More than four            25%
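Applying Table 2-3 is straightforward. A minimal Python sketch, using the 12 GB per-node write cache maximum stated earlier:

def pool_write_cache_limit(num_pools):
    limits = {1: 1.00, 2: 0.66, 3: 0.40, 4: 0.33}
    return limits.get(num_pools, 0.25)    # more than four pools: 25%

WRITE_CACHE_GB = 12   # per-node write cache maximum
for pools in (1, 2, 3, 4, 5):
    cap = pool_write_cache_limit(pools) * WRITE_CACHE_GB
    print(f"{pools} storage pool(s): up to {cap:.2f} GB of write cache per pool")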
For in-depth information about SVC cache partitioning, see IBM SAN Volume Controller 4.2.1 Cache Partitioning, REDP-4426, which is available at this website:
http://www.redbooks.ibm.com/abstracts/redp4426.html?Open
An SVC node treats part of its physical memory as non-volatile, meaning that its contents are preserved across power losses and resets. Bitmaps for FlashCopy and remote mirroring relationships, the virtualization table, and the write cache are kept in the non-volatile memory. In the event of a disruption or external power loss, the physical memory is copied to a file in the file system on the node's internal disk drive, so that the contents can be recovered when external power is restored. The uninterruptible power supply units that are delivered with each node's hardware ensure that there is sufficient internal power to keep a node operational to perform this dump when external power is removed. After dumping the contents of the non-volatile part of the memory to disk, the SVC node shuts down.
Management console
The management console for SVC is referred to as the IBM System Storage Productivity Center (SSPC). SSPC is a hardware and software solution that includes a suite of storage infrastructure management software that can centralize, automate, and simplify the management of complex and heterogeneous storage environments.
Figure 2-11 on page 44 provides an overview of the SVC management components. We describe the details in Chapter 4, SAN Volume Controller initial configuration on page 105. You can obtain further details about the IBM System Storage Productivity Center in IBM System Storage Productivity Center User's Guide Version 1 Release 4, SC27-2336, and in IBM System Storage Productivity Center Introduction and Planning Guide, SC23-8824.
Remote authentication means that the validation of a user's permission to access the SVC's management CLI/GUI is performed on a remote authentication server. That is, except for the superuser account, there is no need to administer local user accounts on the SVC. An existing user management system in your environment can be used to control SVC user access, thereby implementing a single sign-on solution for the SVC.
With SVC V6.3, authentication via native LDAP was introduced. Supported types of LDAP servers are IBM Tivoli Directory Server, Microsoft Active Directory (MS AD), and OpenLDAP, for example running on a Linux system. Users authenticated by an LDAP server can log on to the SVC web-based GUI and the CLI; unlike with remote authentication via Tivoli TIP, users do not need to be configured locally for CLI access, and an SSH key is not required for CLI login in this scenario either.
However, locally administered users can coexist with remote authentication enabled. The default administrative user, superuser, must be a local user; it can be neither deleted nor manipulated, except for its password and SSH key.
Multiple LDAP servers can be defined for availability reasons. Authentication requests are processed by those LDAP servers marked as preferred unless the connections fail or a user is not found. Requests are distributed across all preferred servers for load balancing in a round-robin fashion.
A user that is authenticated remotely by an LDAP server is granted permissions on the SVC according to the role assigned to the group of which it is a member. That is, any SVC user group with its assigned role, for example CopyOperator, must exist with an identical name on the SVC system and on the LDAP server if users in that role are to be authenticated remotely.
Prerequisites:
- Either native LDAP authentication or Tivoli TIP may be selected, but not both.
- If more than one LDAP server is defined, they must all be of the same type, for example, MS AD.
- The SVC user group must be enabled for remote authentication.
- The user group name must be identical in the SVC user group management and on the LDAP server, and it is case-sensitive.
- The LDAP server must transmit a group membership attribute for the user. The default attribute name for MS AD and OpenLDAP is memberOf; for Tivoli Directory Server, it is ibm-allGroups. For OpenLDAP implementations, it might be necessary to configure the memberOf overlay if it is not in place.
In the following example, we demonstrate LDAP user authentication using a Microsoft Windows Server 2008 R2 domain controller acting as LDAP server. The first step is to configure the SVC for remote authentication in Settings > Directory Services, as shown in Figure 2-12.
Click on Configure Remote Authentication and select the authentication type, shown in Figure 2-13 on page 46. Check LDAP and click Next.
In step 2, shown in Figure 2-14, several parameters have to be configured:
- LDAP Type: Select Microsoft Active Directory; for an OpenLDAP server, the type of LDAP server to use would be Other.
- Security: Choose None or Transport Layer Security, depending on whether your LDAP server requires a secure connection; the LDAP server's certificate is configured later.
- Click Advanced Settings to expand the bottom part: leave the User Name and Password fields empty if your LDAP server supports anonymous bind. For our MS AD server, enter the credentials of an existing user on the LDAP server with permission to query the LDAP directory. The user can be entered either in the format of an email address, e.g. administrator@itso.corp, or in the distinguished name format, e.g. cn=Administrator,cn=users,dc=itso,dc=corp. Note the common name portion cn=users for MS AD servers.
- In case your LDAP server uses different attributes than the predefined ones, they can be edited here. There should be no need to edit them when MS AD is used as the LDAP service.
Figure 2-15 on page 47 shows step 3, where the LDAP server details are configured.
1. Enter the IP address of at least one LDAP server.
2. Even though it is marked as optional, it may be required to enter a Base DN in the distinguished name format, which defines the starting point in the directory where to search for users, for example dc=itso,dc=corp.
3. Additional LDAP servers can be added by clicking the green plus icon.
4. Check the Preferred LDAP servers to be used, if desired.
5. Click Finish to save the settings.
Now that we have enabled and configured the SVC for Remote Authentication, we will take care of the user groups. For remote authentication through LDAP no local SVC users have to be maintained, but the user groups have to be set up properly. The existing built-in SVC user groups may be used as well as groups created in the SVC user management. However, using self-defined groups might be advisable to avoid SVC default groups interfering with already existing group names on the LDAP server. Any user group, built-in or self-defined, has to be enabled for remote authentication. As shown in Figure 2-16 on page 47 we create a new user group in Access > Users > New User Group:
1. Assign a meaningful Group Name, e.g. SVC_LDAP_CopyOperator, according to its intended role.
2. Select the desired Role: Copy Operator.
3. Mark LDAP - Enable for this group and click Create.
These settings can be modified in a group's properties at any time. Next, we create a group with exactly the same name on the LDAP server, that is, in the Active Directory domain:
1. On the Domain Controller launch the Active Directory Users and Computers management console, navigate in your domain structure to the entity containing the user groups. Click on the button shown in Figure 2-17 to create a new group.
2. Enter exactly the same name - it is case sensitive - in the Group Name field, shown in Figure 2-18 on page 48. Select the correct Group scope for your environment and select Security for Group type and click on OK.
3. Edit the properties of each user that should be able to log on to the SVC, and make the user a Member Of the appropriate user group for the intended SVC role, as shown in Figure 2-19. Click OK to save and apply the settings.
At this point, we are ready for the authentication of SVC users through the remote server. To make sure that everything works properly, some basic functionality tests should be run to verify the communication between the SVC and the configured LDAP service:
1. In the Settings > Directory Services screen, select Global Actions > Test LDAP connections, as shown in Figure 2-20.
2. In the next step, a real user authentication attempt is tested. In the Settings > Directory Services screen, select Global Actions > Test LDAP authentication, as shown in Figure 2-22.
3. As shown in Figure 2-23, enter the user credentials of a user that was defined on the LDAP server and click Test.
Again, a message is displayed after a successful test: CMMVC7148I Task completed successfully. Both the LDAP connection test and the authentication test must complete successfully to ensure that LDAP authentication will work properly. If an error message points to user authentication problems during the LDAP authentication test, it may be helpful to analyze the LDAP server's response outside the SVC. This can be done using any native LDAP query
tool, for example, the free software LDAP Browser or, for a pure MS AD environment, Microsoft Sysinternals ADExplorer. These tools are available at:
http://www.ldapbrowser.com/
http://technet.microsoft.com/en-us/sysinternals/bb963907
In the example output of LDAP Browser in Figure 2-24, the first Common Name (CN) component of the memberOf attribute must match the SVC user group's name created earlier: SVC_LDAP_CopyOperator.
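The matching rule can be illustrated with a few lines of Python; the directory entry shown is a hypothetical example of a memberOf value returned by MS AD:

def first_cn(member_of):
    # take the first RDN, e.g. "CN=SVC_LDAP_CopyOperator", and return its value
    key, _, value = member_of.split(",")[0].partition("=")
    return value if key.strip().upper() == "CN" else ""

entry = "CN=SVC_LDAP_CopyOperator,CN=Users,DC=itso,DC=corp"
assert first_cn(entry) == "SVC_LDAP_CopyOperator"   # must match case-sensitively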
Assuming that the LDAP connection and the authentication test succeeded, users are able to log on to the SVC GUI and CLI using their network credentials, for example, their Windows domain user name and password. Figure 2-25 shows the web GUI logon screen with the Windows domain credentials entered. A user can log in with either the short name (that is, without the domain component) or the fully qualified user name in the form of an email address:
After a successful login the user name is displayed in a welcome message at the top of the screen as shown in Figure 2-26.
Also, CLI login is possible with either the short user name or the fully qualified name. Figure 2-27 shows a CLI login using PuTTY, authenticated remotely. The CLI command lscurrentuser displays the user name of the currently logged-in user and its role.
The access rights for a user belonging to a specific user group are defined by the role that is assigned to the user group. It is the role that defines what a user can or cannot do on an SVC system. Table 2-5 on page 54 shows the roles, ordered from the least privileged Monitor role at the top down to the most privileged SecurityAdmin role. There is no special user group for the NasSystem role.
Table 2-5 Commands permitted for each role

Role            Allowed commands
Monitor         All svcinfo (informational) commands, plus: svctask finderr,
                dumperrlog, dumpinternallog, chcurrentuser, ping, svcconfig
                backup, svqueryclock
Service         All commands allowed for the Monitor role, plus: applysoftware,
                setlocale, addnode, rmnode, cherrstate, writesernum, detectmdisk,
                includemdisk, clearerrlog, cleardumps, settimezone, stopcluster,
                startstats, stopstats, and setsystemtime
CopyOperator    All commands allowed for the Monitor role, plus: prestartfcconsistgrp,
                startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap,
                startfcmap, stopfcmap, chfcmap, startrcconsistgrp, stoprcconsistgrp,
                switchrcconsistgrp, chrcconsistgrp, startrcrelationship,
                stoprcrelationship, switchrcrelationship, chrcrelationship, and
                chpartnership
Administrator   All commands, except: chauthservice, mkuser, rmuser, chuser,
                mkusergrp, rmusergrp, chusergrp, and setpwdreset
SecurityAdmin   All commands except those allowed by the NasSystem role
NasSystem       svctask: addmember, activatemember, expelmember; create and delete
                file system VDisks
The authentication service supported by SVC is the Tivoli Embedded Security Services server component level 6.2. The Tivoli Embedded Security Services server provides the following key features: Tivoli Embedded Security Services isolates the SVC from the actual directory protocol in use, which means that the SVC communicates only with Tivoli Embedded Security Services to get its authentication information. The type of protocol that is used to access the central directory or the kind of the directory system that is used is transparent to SVC. Tivoli Embedded Security Services provides a secure token facility that is used to enable single sign-on (SSO). SSO means that users do not have to log in multiple times when using what appears to them to be a single system. It is used within Tivoli Productivity Center. When SVC access is launched from within Tivoli Productivity Center, the user will not have to log on to the SVC, because the user has already logged in to Tivoli Productivity Center.
Services server. If the HTTP option is used, the user and password information is transmitted in clear text over the IP network.
2. Configure user groups on the system matching those user groups that are used by the authentication service. For each group of interest that is known to the authentication service, there must be an SVC user group with the same name and the remote setting enabled. For example, you can have a group called sysadmins, whose members require the SVC Administrator role. Configure this group using the following command:
svctask mkusergrp -name sysadmins -remote -role Administrator
If none of a user's groups match any of the SVC user groups, the user is not permitted to access the system.
3. Configure users that do not require SSH access. Any SVC users that are to be used with the remote authentication service and do not require SSH access must be deleted from the system. The superuser cannot be deleted; it is a local user and cannot use the remote authentication service.
4. Configure users that do require SSH access. Any SVC users that are to be used with the remote authentication service and do require SSH access must have their remote setting enabled and the same password set on the system and the authentication service. The remote setting instructs SVC to consult the authentication service for group information after the SSH key authentication step to determine the user's role. The need to configure the user's password on the system in addition to the authentication service is due to a limitation in the Tivoli Embedded Security Services server software.
5. Configure the system time. For correct operation, both the SVC system and the system running the Tivoli Embedded Security Services server must have the exact same view of the current time; the easiest way is to have them both use the same Network Time Protocol (NTP) server. Failure to follow this step can lead to poor interactive performance of the SVC user interface or incorrect user-role assignments.
Also, Tivoli Storage Productivity Center leverages the Tivoli Integrated Portal infrastructure and its underlying WebSphere Application Server capabilities to make use of an LDAP registry and enable single sign-on (SSO). You can obtain more information about implementing SSO within Tivoli Storage Productivity Center 4.1 in the chapter about LDAP authentication support and single sign-on in IBM Tivoli Storage Productivity Center V4.1 Release Guide, SG24-7725, which is available at this website:
http://www.redbooks.ibm.com/redpieces/abstracts/sg247725.html?Open
Note: Since SVC v6.2 and with the 2145-CG8 hardware, the IBM System Storage SAN Volume Controller Storage Engine offers 10 Gigabit Ethernet connectivity. For more information about this topic, see:
http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=an&subtype=ca&appname=gpateam&supplier=897&letternum=ENUS111-083
This solution includes a Common Information Model (CIM) Agent to enable unified storage management based on open standards for units that comply with CIM Agent standards. The SVC 2145-CG8 Storage Engine has the following key hardware features:
- Intel Xeon 2.53 GHz quad-core processor (Westmere)
- 24 GB memory base, and up to 128 GB
- Four 2/4/8 Gbps FC ports
- Up to four solid-state drives, enabling scale-out high-performance solid-state drive support
- Two redundant power supplies
- A 19-inch rack-mounted enclosure, 1U high
- IBM Systems Director Active Energy Manager-enabled
The 2145-CG8 nodes can be easily integrated into existing SVC clustered systems. The nodes can be intermixed in pairs within existing SVC systems. Mixing node types in a system results in volume performance characteristics that depend on the node type in the volume's I/O Group. The standard nondisruptive clustered system upgrade process can be used to replace older engines with new 2145-CG8 engines; see IBM SAN Volume Controller Software Installation and Configuration Guide, GC27-2286, for more information about this topic. Refer to the following link for integration into existing clustered systems, and for compatibility and interoperability with installed nodes and UPSs:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1002999
The 2145-CG8 is shipped with preloaded V6.2 software. Figure 2-30 shows the front view of the SVC 2145-CG8 node.
Remember that several SVC features, such as iSCSI, are software features and are therefore available on all node types running SVC V5.1 or later.
Table 2-7 shows the rules that apply with respect to the number of interswitch link (ISL) hops allowed in a SAN fabric between SVC nodes or the system.
Table 2-7 Number of supported ISL hops

Connection                              Supported ISL hops
Between nodes in an I/O Group           0 (connect to the same switch)
Between nodes in separate I/O Groups    0 (connect to the same switch)
Between nodes and the disk subsystem    1 (recommended: 0, connect to the same switch)
Between nodes and the host server       Maximum 3
The system configuration node can be accessed on either eth0 or eth1. The system can have two IPv4 and two IPv6 addresses that are used for configuration purposes (CLI or CIMOM access). The clustered system can therefore be managed by SSH clients or GUIs on System Storage Productivity Centers on separate physical IP networks. This capability provides redundancy in the event of a failure of one of these IP networks. Support for iSCSI introduces one additional IPv4 and one additional IPv6 address for each SVC node port; these IP addresses are independent of the system configuration IP addresses. See Figure 2-10 on page 33 for an IP address overview.
The actual times shown are not that important, but note the dramatic difference between accessing data that is located in cache and data that is located on external disk. We have added a second scale to Figure 2-31, which gives you an idea of how long it takes to access the data in a scenario where a single CPU cycle takes 1 second. This scale gives you an idea of the importance of future storage technologies closing or reducing the gap between
access times for data stored in cache/memory versus access times for data stored on an external medium.
Since magnetic disks were first introduced by IBM in 1956 (RAMAC), they have shown remarkable progress in capacity growth, form factor/size reduction, price decrease ($/GB), and reliability. However, the number of I/Os that a disk can handle and the response time that it takes to process a single I/O have not improved at the same rate, although they have certainly improved. In actual environments, we can expect from today's enterprise-class FC or serial-attached SCSI (SAS) disk up to 200 IOPS per disk, with an average response time (latency) of approximately 6 ms per I/O.
To summarize, today's rotating disks continue to advance in capacity (several TBs), form factor/footprint (3.5 inches, 2.5 inches, and 1.8 inches), and price ($/GB), but they are not getting much faster. The limiting factor is the number of revolutions per minute (RPM) that a disk can perform (say, 15,000), which defines the time that is required to access a specific data block on a rotating device. There will likely be small improvements in the future, but a big step, such as doubling the RPM, if technically even possible, inevitably has an associated increase in power consumption and a price that will be an inhibitor.
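The RPM limit translates directly into rotational latency. A minimal sketch of the arithmetic (average latency is half a revolution):

def avg_rotational_latency_ms(rpm):
    ms_per_revolution = 60_000 / rpm
    return ms_per_revolution / 2

print(avg_rotational_latency_ms(15_000))   # 2.0 ms for a 15K RPM drive
print(avg_rotational_latency_ms(7_200))    # ~4.2 ms for a 7.2K RPM drive

Seek time and transfer time come on top of this, which is why a single enterprise disk saturates at roughly 200 IOPS regardless of its capacity.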
You can obtain details of Storage Class Memory at this website:
http://tinyurl.com/plk7as
You can read a comprehensive and worthwhile overview of solid-state drive technology in a subset of the well-known Spring 2010 and 2009 SNIA Technical Tutorials, which are available on the SNIA website:
http://www.snia.org/education/tutorials/2010/spring#solid
When these technologies become a reality, they will fundamentally change the architecture of today's storage infrastructures.
Internal SSD
Some SVC models support 2.5-inch solid-state drives as internal storage. A maximum of four drives can be installed per node, and up to 32 drives in a clustered system. These drives can be used to create RAID managed disks that, in turn, can be used to create volumes. Internal solid-state drives can be configured in the following two RAID levels:
- RAID 1/10: In this configuration, one half of the mirror is in each node of the I/O Group, providing redundancy in case of a node failure.
- RAID 0: In this configuration, all the drives are assigned to the same node. This configuration is intended to be used with volume (VDisk) mirroring, because no redundancy is provided in case of a node failure.
External SSD
The SVC is able to manage solid-state drives in externally attached storage controllers or enclosures. The solid-state drives are configured as an array with a LUN and presented to the SVC as a normal MDisk. The solid-state MDisk tier then needs to be set with the chmdisk -tier generic_ssd command or through the GUI. The SSD MDisks can then be placed into a single-tier SSD storage pool, and high-workload volumes can be manually selected and placed into the pool to gain the performance benefits of SSDs.
For more effective use of SSDs, place the SSD MDisks into a multitiered storage pool combined with HDD MDisks (generic_hdd tier). Then, with Easy Tier turned on, the system automatically detects and migrates high-workload extents onto the solid-state MDisks.
Easy Tier monitors the host I/O activity and latency on the extents of all volumes with the Easy Tier function turned on in a multitier storage pool over a 24-hour period. It then creates an extent migration plan based on this activity, and will dynamically move high activity or hot extents to a higher tier within the storage pool. It will also move extents whose activity has dropped off or cooled from the high tier MDisks back to a lower tiered MDisk. Because this migration works at the extent level and not at the volume level, it is often referred to as sub-LUN migration. The Easy Tier function may be turned on or off at the storage pool and volume level.
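The following minimal Python sketch conveys the kind of migration plan Easy Tier builds from 24 hours of statistics; the thresholds and data are hypothetical, and the real algorithm is considerably more sophisticated:

def plan_migrations(extent_stats, hot_threshold, cold_threshold):
    # extent_stats: {extent_id: (current_tier, average_iops)}
    plan = []
    for extent, (tier, iops) in extent_stats.items():
        if tier == "generic_hdd" and iops >= hot_threshold:
            plan.append((extent, "promote to generic_ssd"))
        elif tier == "generic_ssd" and iops <= cold_threshold:
            plan.append((extent, "demote to generic_hdd"))
    return plan

stats = {1: ("generic_hdd", 420), 2: ("generic_ssd", 3), 3: ("generic_hdd", 8)}
print(plan_migrations(stats, hot_threshold=100, cold_threshold=10))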
2.13.1 SVC 6.3 supported hardware list, device driver, and firmware levels
With the SVC 6.3 release, as in every release, IBM offers functional enhancements and new hardware that can be integrated into existing or new SVC systems, as well as interoperability enhancements and new support for servers, SAN switches, and disk subsystems. See the most current information at this website:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html
In this stretched cluster configuration, the SVC enables a highly available stretched volume to be concurrently accessed by servers at both data centers. When combined with server data mobility functions, such as VMware vMotion or PowerVM Live Partition Mobility, an SVC stretched cluster enables nondisruptive storage and virtual machine mobility between the two data centers. Depending on application performance requirements, SVC stretched clusters can be deployed between data centers up to 300 km apart. SAN Volume Controller stretched clusters can be combined with SVC Metro Mirror or Global Mirror to support a third data center for applications that require both high availability and disaster recovery in a single solution.
Round-robin data paths to attached storage
System performance improvements are available with improved management of the I/O paths to attached storage systems. This round-robin method allows more flexibility for data paths and provides greater performance, especially in the event that a data path goes down.
Lower-bandwidth Global Mirror
Customers who want to use the Global Mirror capability with the SAN Volume Controller can now do so on a lower-bandwidth link between sites. Remote mirroring with the SAN Volume Controller now supports higher recovery point objective (RPO) times by allowing the data at the disaster recovery site to get further out of sync with the production site if the communication link limits replication, and then approach synchronicity again when the link is less busy. This lower-bandwidth remote mirroring uses space-efficient FlashCopy targets as sources in remote copy relationships to increase the time allowed to complete a remote copy data cycle.
Remote mirroring between SVC and Storwize V7000
Customers have greater flexibility in their expanding environments using both Storwize V7000 and SAN Volume Controller, with the ability now to remote mirror from one system to the other. Remote deployments for disaster recovery for current SAN Volume Controller environments can easily be fitted with the Storwize V7000, or vice versa. This new function does not affect how Metro/Global Mirror on the SVC or Remote Mirroring on the Storwize V7000 is licensed. You must still license usage for volumes replicated at the source and the target. Because of a difference in metrics, SVC mirroring can be licensed for a subset of the total storage virtualized, but Storwize V7000 mirroring is licensed for the entire storage system.
Added interoperability
SVC now supports many more heterogeneous data center environments with the addition of interoperability support for:
Red Hat Enterprise Linux 6
VMware vSphere 5
IBM XIV Gen3
Bull StoreWay models 1500, 2000, and 3000
Fujitsu ETERNUS models DX80 S2, DX90 S2, DX410 S2, and DX440 S2
HP 3PAR models F200, F400, T400, and T800
Texas Memory Systems RamSan-440
Violin Flash Memory Array models 3140 and 3200
For all the specific models and host environments supported, visit:
http://www.ibm.com/storage/support/2145
Chapter 3. Planning and configuration
7. Determine the SVC service IP address and the IBM System Storage Productivity Center (SVC console) IP address.
8. Determine the IP addresses for the SVC system and for the hosts that connect through iSCSI.
9. Define a naming convention for the SVC nodes, the hosts, and the storage subsystems.
10. Define the managed disks (MDisks) in the disk subsystem.
11. Define the Storage Pools. The Storage Pools depend on the disk subsystem in place and the data migration requirements.
12. Plan the logical configuration of the volumes within the I/O Groups and the Storage Pools in such a way as to optimize the I/O load between the hosts and the SVC.
13. Plan for the physical location of the equipment in the rack.
SVC planning can be categorized into two types:
Physical planning
Logical planning
We describe these planning types in more detail in the following sections.
2145 UPS-1U
The 2145 Uninterruptible Power Supply-1U (2145 UPS-1U) is one EIA unit high. It is shipped with, and can only operate with, the following node types:
SAN Volume Controller 2145-CG8
SAN Volume Controller 2145-CF8
SAN Volume Controller 2145-8A4
SAN Volume Controller 2145-8G4
SAN Volume Controller 2145-8F4
When configuring the 2145 UPS-1U, the voltage that is supplied to it must be 200 - 240 V, single phase.
Tip: The 2145 UPS-1U has an integrated circuit breaker and does not require external protection.
There are guidelines to follow for Fibre Channel (FC) cable connections. Occasionally, the introduction of a new SVC hardware model brings internal changes; one example is the worldwide port name (WWPN) port mapping. The 2145-8A4, 2145-8G4, 2145-CF8, and 2145-CG8 share the same mapping. Figure 3-2 on page 72 shows the WWPN mapping.
Figure 3-3 on page 73 shows a sample layout where nodes within each I/O Group have been split between separate racks. This protects against power failures and other events that only affect a single rack.
Volume configuration
Host mapping (LUN masking)
Advanced Copy Services functions
SAN boot support
Data migration from non-virtualized storage subsystems
SVC configuration backup procedure
Each node in an SVC clustered system needs to have at least one Ethernet connection. Starting with SVC 6.1, system management is performed through an embedded GUI running on the nodes. A separate console, such as the traditional SVC Hardware Management Console (HMC) or IBM System Storage Productivity Center (SSPC), is no longer required to access the management interface. To access the management GUI, you direct a web browser to the system management IP address.
The clustered system must first be created specifying either an IPv4 or an IPv6 system address for port 1. After the clustered system is created, additional IP addresses can be created on port 1 and port 2 until both ports have an IPv4 and an IPv6 address defined. This allows the system to be managed on separate networks, which provides redundancy in the event of a network failure. Figure 3-4 on page 75 shows the IP configuration possibilities.
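As a minimal sketch under these rules, adding a second management address on port 2 after the system exists might look like the following fragment (the addresses are sample values only):

chsystemip -clusterip 10.11.12.122 -gw 10.11.12.1 -mask 255.255.255.0 -port 2
lssystemip

lssystemip then lists the management addresses defined on both ports, which is a quick way to confirm the redundant network configuration.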
Support for iSCSI provides one additional IPv4 address and one additional IPv6 address for each Ethernet port on every node. These IP addresses are independent of the clustered system configuration IP addresses. The SVC model 2145-CG8 can optionally have a SAS adapter with the external ports disabled, or a high-speed 10 Gbps Ethernet adapter with two ports; with the 10 Gbps adapter, two additional IPv4 or IPv6 addresses are required. When accessing the SVC through the GUI or Secure Shell (SSH), choose one of the available IP addresses to connect to. There is no automatic failover capability, so if one network is down, use an IP address on the alternate network. Clients might be able to use intelligence in domain name servers (DNS) to provide partial failover.
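A hedged sketch of assigning an iSCSI address to a node Ethernet port (node name, addresses, and the trailing port ID are hypothetical sample values):

cfgportip -node node1 -ip 10.11.13.10 -mask 255.255.255.0 -gw 10.11.13.1 1
lsportip

Here, the trailing 1 selects Ethernet port 1 on node1; lsportip shows the addresses now configured on each port of every node.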
to the SVC system. The SVC nodes within an SVC system must be able to see each other and all of the storage that is assigned to the SVC system. The zoning capabilities of the SAN switch are used to create three distinct zones. SVC 6.3 supports 2 Gbps, 4 Gbps, or 8 Gbps FC fabrics, depending on the hardware platform and on the switch where the SVC is connected. In an environment with a fabric containing switches of multiple speeds, the best practice is to connect the SVC and the disk subsystem to the switch operating at the highest speed.
All SVC nodes in the SVC clustered system are connected to the same SANs, and they present volumes to the hosts. These volumes are created from Storage Pools that are composed of MDisks presented by the disk subsystems. There must be three distinct zones in the fabric:
SVC clustered system zone: Create one zone per fabric with all of the SVC ports cabled to this fabric to allow SVC internode communication.
Host zones: Create an SVC host zone for each server accessing storage from the SVC system.
Storage zone: Create one SVC storage zone for each storage subsystem that is virtualized by the SVC.
Configure your SAN so that FC traffic can be passed between the two clustered systems. To configure the SAN this way, you can connect the clustered systems to the same SAN, merge the SANs, or use routing technologies.
Configure zoning to allow all of the nodes in the local fabric to communicate with all of the nodes in the remote fabric. Optionally, modify the zoning so that the hosts that are visible to the local clustered system can recognize the remote clustered system. This capability allows a host to have access to data in both the local and remote clustered systems. Verify that clustered system A cannot recognize any of the back-end storage that is owned by clustered system B. A clustered system cannot access logical units (LUs) that a host or another clustered system can also access. Figure 3-5 shows the SVC zoning topology.
Figure 3-6 on page 78 shows an example of SVC, host, and storage subsystem connections.
Figure 3-6 Example of SVC, host, and storage subsystem connections
You must also observe the following additional guidelines:
LUNs (MDisks) must have exclusive access to a single SVC clustered system; they cannot be shared between SVC clustered systems or hosts.
A storage controller can present LUNs to both the SVC (as MDisks) and to other hosts in the SAN. However, in this case, it is better to avoid having the SVC and the hosts share the same storage ports.
Mixed port speeds are not permitted for intracluster communication. All node ports within a clustered system must run at the same speed.
ISLs are not to be used for intracluster node communication or node-to-storage controller access.
The switch configuration in an SVC fabric must comply with the switch manufacturer's configuration rules, which can impose restrictions on the switch configuration. For example, a switch manufacturer might limit the number of supported switches in a SAN. Operation outside of the switch manufacturer's rules is not supported.
Host bus adapters (HBAs) in dissimilar hosts, or dissimilar HBAs in the same host, need to be in separate zones. For example, if you have AIX and Microsoft hosts, they need to be in separate zones. In this case, dissimilar means that the hosts are running separate operating systems or are using separate hardware platforms. Therefore, various levels of the same operating system are regarded as similar. This requirement is a SAN interoperability issue, rather than an SVC requirement.
Host zones are to contain only one initiator (HBA) each, and as many SVC node ports as you need, depending on the high availability and performance that you want from your configuration.
Attention: Be aware of the following considerations:
The use of ISLs for intracluster node communication can negatively affect the availability of the system because of the high dependency on the quality of these links to maintain heartbeat and other system management services. Therefore, we strongly advise that they be used only as part of an interim configuration to facilitate SAN migrations, and not as part of the architected solution.
The use of ISLs for SVC node to storage controller access can lead to port congestion, which can negatively affect the performance and resiliency of the SAN. Therefore, we strongly advise that they be used only as part of an interim configuration to facilitate SAN migrations, and not as part of the architected solution.
With SVC 6.3, you can use ISLs between nodes, but they must be in a dedicated SAN, Virtual SAN (Cisco technology), or Logical SAN (Brocade technology).
The use of mixed port speeds for intercluster communication can lead to port congestion, which can negatively affect the performance and resiliency of the SAN, and is therefore not supported.
You can use the lsfabric command to generate a report that displays the connectivity between nodes and other controllers and hosts. This report is particularly helpful in diagnosing SAN problems.
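A brief sketch of how the command might be used (the node name is hypothetical):

lsfabric -delim :
lsfabric -node node1

The first form produces the full colon-delimited connectivity report; the second narrows the report to a single node, which is useful when chasing a missing login from one specific port.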
Zoning examples
Figure 3-7 shows an SVC clustered system zoning example.
You can set up the equivalent configuration with only IPv6 addresses. Figure 3-11 shows the use of IPv4 management and iSCSI addresses in two separate subnets.
Figure 3-13 on page 83 shows the use of a redundant network and a third subnet for management.
Figure 3-14 shows the use of a redundant network for both iSCSI data and management.
Be aware of these considerations:
All of the examples are valid using IPv4 and IPv6 addresses.
It is valid to use IPv4 addresses on one port and IPv6 addresses on the other port.
It is valid to have separate subnet configurations for IPv4 and IPv6 addresses.
In general, configure disk subsystems as though there is no SVC. However, we suggest the following specific guidelines:
Disk drives:
Exercise caution with large disk drives so that you do not have too few spindles to handle the load. Using RAID-5 is suggested for the vast majority of workloads.
Array sizes:
An array size of 8+P or 4+P is suggested for the DS4000 and DS5000 families, if possible. Use a DS4000 segment size of 128 KB or larger to help sequential performance. Upgrade to EXP810 drawers, if possible. Create LUN sizes that are equal to the RAID array and rank size. (If the array size is greater than 2 TB and the disk subsystem does not support MDisks greater than 2 TB, create the minimum number of equal-size LUNs.) When adding more disks to a subsystem, consider adding the new MDisks to existing Storage Pools rather than creating additional small Storage Pools. Scripts are available to restripe volume extents evenly across all MDisks in the Storage Pools, if required. Go to the following website and search for svctools:
https://www.ibm.com/developerworks/mydeveloperworks/groups/service/html/communityview?communityUuid=5cca19c3-f039-4e00-964a-c5934226abc1
Maximum of 1024 worldwide node names (WWNNs) per cluster:
EMC DMX/SYMM, all HDS, and SUN/HP HDS clones use one WWNN per port. Each WWNN appears as a separate controller to the SVC. IBM, EMC CLARiiON, and HP use one WWNN per subsystem. Each WWNN appears as a single controller with multiple ports/WWPNs, for a maximum of 16 ports/WWPNs per WWNN.
DS8000 using four or eight of the 4-port HA cards:
Use ports 1 and 3, or ports 2 and 4, on each card (it does not matter for 8 Gb cards). This setup provides 8 or 16 ports for SVC use. Use a minimum of 8 ports up to 40 ranks. Use 16 ports, which is the maximum, for 40 or more ranks.
DS4000/DS5000 and EMC CLARiiON/CX:
Both systems have the preferred controller architecture, and the SVC supports this configuration. Use a minimum of 4 ports, and preferably 8 or more ports, up to a maximum of 16 ports, so that more ports equate to more concurrent I/O driven by the SVC. Mapping controller A ports to fabric A and controller B ports to fabric B, or cross-connecting ports to both fabrics from both controllers, is supported. The cross-connecting approach is preferred to avoid AVT/Trespass occurring if a fabric or all paths to a fabric fail.
DS3400:
Use a minimum of 4 ports.
XIV requirements and restrictions:
The use of XIV extended functions, including snaps, thin provisioning, synchronous replication, and LUN expansion, on LUNs presented to the SVC is not supported. A maximum of 511 LUNs from one XIV system can be mapped to an SVC clustered system.
Full 15-module XIV recommendations - 161 TB usable:
Use two interface host ports from each of the six interface modules. Use ports 1 and 3 from each interface module, and zone these 12 ports with all SVC node ports. Create 48 LUNs of equal size, each of which is a multiple of 17 GB (approximately 1632 GB each if using the entire full-frame XIV with the SVC). Map the LUNs to the SVC as 48 MDisks, and add all of them to the one XIV Storage Pool so that the SVC drives I/O to four MDisks/LUNs for each of the 12 XIV FC ports. This design provides a good queue depth on the SVC to drive the XIV adequately.
Six-module XIV recommendations - 55 TB usable:
Use two interface host ports from each of the two active interface modules. Use ports 1 and 3 from interface modules 4 and 5 (interface module 6 is inactive), and zone these four ports with all SVC node ports. Create 16 LUNs of equal size, each of which is a multiple of 17 GB (approximately 1632 GB each if using the entire XIV with the SVC). Map the LUNs to the SVC as 16 MDisks, and add all of them to the one XIV Storage Pool so that the SVC drives I/O to four MDisks/LUNs for each of the four XIV FC ports. This design provides a good queue depth on the SVC to drive the XIV adequately.
Nine-module XIV recommendations - 87 TB usable:
Use two interface host ports from each of the four active interface modules. Use ports 1 and 3 from interface modules 4, 5, 7, and 8 (interface modules 6 and 9 are inactive), and zone these eight ports with all of the SVC node ports. Create 26 LUNs of equal size, each of which is a multiple of 17 GB (approximately 1632 GB each if using the entire XIV with the SVC). Map the LUNs to the SVC as 26 MDisks, and add all of them to the one XIV Storage Pool so that the SVC drives I/O to three MDisks/LUNs on each of six ports and four MDisks/LUNs on the other two XIV FC ports. This design provides a useful queue depth on the SVC to drive the XIV adequately.
Configure XIV host connectivity for the SVC clustered system:
Create one host definition on the XIV, and include all SVC node WWPNs. You can create clustered system host definitions (one per I/O Group), but the preceding method is easier. Map all LUNs to all SVC node WWPNs.
IP addresses. Note that if you plan to use the second Ethernet port on each node, it is possible to have two LAN segments. However, port 1 of every node must be in one LAN segment, and port 2 of every node must be in the other LAN segment.
To maintain application uptime in the unlikely event of an individual SVC node failing, SVC nodes are always deployed in pairs (I/O Groups). If a node fails or is removed from the configuration, the remaining node operates in a degraded mode, but it is still a valid configuration. The remaining node operates in write-through mode, meaning that the data is written directly to the disk subsystem (the cache is disabled for writes).
The uninterruptible power supply unit must be in the same rack as the node to which it provides power, and each uninterruptible power supply unit can have only one node connected.
The FC SAN connections between the SVC node and the switches are optical fiber. These connections can run at 2 Gbps, 4 Gbps, or 8 Gbps, depending on your SVC and switch hardware. The 2145-CG8, 2145-CF8, 2145-8A4, 2145-8G4, and 2145-8F4 SVC nodes auto-negotiate the connection speed with the switch.
The SVC node ports must be connected to the FC fabric only. Direct connections between the SVC and the host, or the disk subsystem, are unsupported.
Two SVC clustered systems cannot have access to the same LUNs within a disk subsystem. Configuring zoning such that two SVC clustered systems have access to the same LUNs (MDisks) can, and likely will, result in data corruption.
The two nodes within an I/O Group can be co-located (within the same set of racks) or can be located in separate racks and separate rooms. See 3.3.6, Split-cluster system configuration on page 87 for more information about this topic.
The SVC uses three MDisks as quorum disks for the clustered system. A best practice for redundancy is to have each quorum disk located in a separate storage subsystem, where possible. The current locations of the quorum disks can be displayed using the lsquorum command and relocated using the chquorum command.
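As a hedged sketch of those two commands (the MDisk name and quorum index are hypothetical, and the exact chquorum syntax is worth verifying against the CLI guide for your code level):

lsquorum
chquorum -mdisk mdisk9 2
chquorum -active 2

lsquorum reports the three quorum candidates and flags the active one; the chquorum commands here move quorum index 2 onto mdisk9 and then make it the active quorum disk.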
2. ISL configuration:
a. ISLs between SVC nodes
b. Maximum distance similar to Metro Mirror distances
c. Physical requirements similar to Metro Mirror requirements
d. ISL distance extension with active and passive WDM devices
Figure 3-16 on page 88 shows an example of Split Cluster with ISL Configuration.
Use the split-cluster system configuration with the volume mirroring option to realize an availability benefit. After volume mirroring has been configured, use the lscontrollerdependentvdisks command to validate that the volume mirrors reside on separate storage controllers (a usage sketch follows this section). This will ensure that access to the volumes is maintained if a storage controller is lost.
When implementing a split-cluster system configuration, two of the three quorum disks can be co-located in the same rooms where the SVC nodes are located. However, the active quorum disk must reside in a separate room. This configuration ensures that a quorum disk is always available, even after a single site failure. For a split-cluster system configuration, configure the SVC as follows:
Site 1: Half of the SVC clustered system nodes + one quorum disk candidate
Site 2: Half of the SVC clustered system nodes + one quorum disk candidate
Site 3: Active quorum disk
When a split-cluster configuration is used with volume mirroring, it provides a high availability solution that is tolerant of a failure at a single site. If either the primary or the secondary site fails, the remaining sites can continue performing I/O operations. See Appendix C, SAN Volume Controller Split I/O Group Overview, Diagnostics, and Recovery Guidelines on page 899 for more information about split-cluster configurations.
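A minimal sketch of the validation step mentioned above (the controller name is hypothetical):

lscontrollerdependentvdisks controller0

Any volume that appears in the output depends entirely on that one controller; for a mirrored volume, this indicates that both copies sit behind the same controller and the mirror placement needs to be corrected.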
Table 3-1 Extent size and maximum clustered system capacities
Extent size    Maximum clustered system capacity
16 MB          64 TB
32 MB          128 TB
64 MB          256 TB
128 MB         512 TB
256 MB         1 PB
512 MB         2 PB
1,024 MB       4 PB
2,048 MB       8 PB
4,096 MB       16 PB
8,192 MB       32 PB
There are several additional Storage Pool considerations:
Maximum clustered system capacity is related to the extent size. A 16 MB extent supports 64 TB, and the capacity doubles with each increment in extent size; for example, 32 MB = 128 TB. We strongly advise a minimum extent size of 128 MB or 256 MB. The IBM Storage Performance Council (SPC) benchmarks used a 256 MB extent. Pick one extent size and use it for all Storage Pools: you cannot migrate volumes between Storage Pools with different extent sizes. However, you can use volume mirroring to create copies between Storage Pools with different extent sizes (a CLI sketch follows this list).
Storage Pool reliability, availability, and serviceability (RAS) considerations. It might make sense to create multiple Storage Pools if you can ensure that a host only gets its volumes built from one of the Storage Pools; if that Storage Pool goes offline, it affects only a subset of all of the hosts using the SVC. However, this approach can lead to a high number of Storage Pools, approaching the SVC limits. If you do not isolate hosts to Storage Pools, create one large Storage Pool. Creating one large Storage Pool assumes that the physical disks are all the same size, speed, and RAID level. Remember that a Storage Pool goes offline if any of its MDisks is unavailable, even if that MDisk has no data on it, so do not put MDisks into a Storage Pool until they are needed. Create at least one separate Storage Pool for all the image mode volumes, and make sure that the LUNs that are given to the SVC have any host persistent reserves removed.
Storage Pool performance considerations. It might make sense to create multiple Storage Pools if you are attempting to isolate workloads to separate disk spindles. Storage Pools with too few MDisks cause MDisk overload, so it is better to have a higher spindle count in a Storage Pool to meet workload requirements.
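As promised in the first consideration, a hedged sketch of creating a pool with an explicit extent size and then using volume mirroring to move a volume into it from a pool with a different extent size (all names are hypothetical):

mkmdiskgrp -name POOL_SAS_256 -ext 256
addvdiskcopy -mdiskgrp POOL_SAS_256 vol01
lsvdisksyncprogress
rmvdiskcopy -copy 0 vol01

Only remove the original copy (copy 0 here) after lsvdisksyncprogress shows that the new copy is fully synchronized.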
The Storage Pool and SVC cache relationship: the SVC employs cache partitioning to limit the potentially negative effect that a poorly performing storage controller can have on the clustered system. The partition allocation size is defined based on the number of Storage Pools configured. This design protects against an individual overloaded or failing controller consuming write cache and degrading the performance of the other Storage Pools in the clustered system. More details are discussed in 2.8.3, Cache on page 41. Table 3-2 shows the limit of the write cache data.
Table 3-2 Limit of the write cache data
Number of Storage Pools    Upper limit
1                          100%
2                          66%
3                          40%
4                          30%
5 or more                  25%
Consider the rule to be that no single partition can occupy more than its upper limit of cache capacity with write data. These limits are upper limits, and they are the points at which the SVC cache will start to limit incoming I/O rates for volumes created from the Storage Pool. If a particular partition reaches this upper limit, the net result is the same as a global cache resource that is full. That is, the host writes will be serviced on a one-out-one-in basis, because the cache destages writes to the back-end disks. However, only writes targeted at the full partition are limited. All I/O destined for other (non-limited) Storage Pools will continue as normal. Read I/O requests for the limited partition will also continue as normal. However, because the SVC is destaging write data at a rate that is obviously greater than the controller can sustain (otherwise the partition does not reach the upper limit), read response times are also likely to be impacted.
I/O Group considerations
When you create a volume, it is associated with one node of an I/O Group. By default, every time that you create a new volume, it is associated with the next node using a round-robin algorithm. You can instead specify a preferred access node, which is the node through which you send I/O to the volume. A volume is defined for an I/O Group. Even if you have eight paths for each volume, all I/O traffic flows toward only one node (the preferred node). Therefore, only four paths are really used by the IBM Subsystem Device Driver (SDD). The other four paths are used only in the case of a failure of the preferred node or when a concurrent code upgrade is running.
Creating image mode volumes
Use image mode volumes when an MDisk already has data on it from a non-virtualized disk subsystem. When an image mode volume is created, it directly corresponds to the MDisk from which it is created. Therefore, volume logical block address (LBA) x = MDisk LBA x. The capacity of an image mode volume defaults to the capacity of the supplied MDisk. When you create an image mode disk, the MDisk must have a mode of unmanaged and therefore must not belong to any Storage Pool. A capacity of 0 is not allowed. Image mode volumes can be created in sizes with a minimum granularity of 512 bytes, and they must be at least one block (512 bytes) in size.
Creating managed mode volumes with sequential or striped policy
When creating a managed mode volume with a sequential or striped policy, you must use a number of MDisks containing extents that are free and of a size that is equal to or greater than the size of the volume that you want to create. There might be sufficient extents available on the MDisk, but there might not be a contiguous block large enough to satisfy the request.
Thin-Provisioned volume considerations
When creating a Thin-Provisioned volume, you need to understand the utilization patterns of the applications or group users accessing this volume. You must take into consideration items such as the actual size of the data, the rate of creation of new data, and the modification or deletion of existing data. There are two operating modes for Thin-Provisioned volumes:
Autoexpand volumes allocate storage from a Storage Pool on demand, with minimal user intervention required. However, a misbehaving application can cause a volume to expand until it has consumed all of the storage in a Storage Pool.
Non-autoexpand volumes have a fixed amount of storage assigned. In this case, the user must monitor the volume and assign additional capacity when required. A misbehaving application can only cause the volume that it is using to fill up.
Depending on the initial size for the real capacity, the grain size and a warning level can be set. If a volume goes offline, either through a lack of available physical storage for autoexpand or because a volume marked as non-autoexpand was not expanded in time, there is a danger of data being left in the cache until storage is made available. This situation is not a data integrity or data loss issue, but you must not rely on the SVC cache as a backup storage mechanism.
Important: Keep a warning level on the used capacity so that it provides adequate time to respond and provision more physical capacity. Warnings must not be ignored by an administrator. Use the autoexpand feature of the Thin-Provisioned volumes.
The grain size (the allocation unit for the real capacity in the volume) can be set to 32 KB, 64 KB, 128 KB, or 256 KB. A smaller grain size uses space more effectively, but it results in a larger directory map, which can reduce performance.
Thin-Provisioned volumes require more I/Os because of directory accesses. For truly random workloads with 70% read and 30% write, a Thin-Provisioned volume requires approximately one directory I/O for every user I/O. The directory is two-way write-back-cached (just like the SVC fast-write cache), so certain applications will perform better. Thin-Provisioned volumes also require more CPU processing, so the performance per I/O Group can be reduced.
A Thin-Provisioned volume feature called zero detect provides clients with the ability to reclaim unused allocated disk space (zeros) when converting a fully allocated volume to a Thin-Provisioned volume using volume mirroring.
Volume mirroring guidelines
Create or identify two separate Storage Pools to allocate space for your mirrored volume. Allocate the Storage Pools containing the mirrors from separate storage controllers. If possible, use a Storage Pool with MDisks that share the same characteristics. Otherwise, the volume performance can be affected by the poorer-performing MDisk. A CLI sketch of the thin-provisioning parameters follows these guidelines.
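Drawing the thin-provisioning parameters above together, a minimal hypothetical sketch: the following command creates a 500 GB Thin-Provisioned volume with 10% real capacity, autoexpand, a 32 KB grain, and a warning at 80% of used capacity (the pool name, I/O Group, and volume name are sample values):

mkvdisk -mdiskgrp POOL_SAS_256 -iogrp 0 -name thin_vol01 -size 500 -unit gb -rsize 10% -autoexpand -grainsize 32 -warning 80%

Adding -copies 2 with two colon-separated pool names would combine this with the volume mirroring guidelines in the same command.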
Notes: The following list shows the suggested number of paths per volume (n+1 redundancy):
With 2 HBA ports, zone the HBA ports to SVC ports 1-to-2 for a total of four paths.
With 4 HBA ports, zone the HBA ports to SVC ports 1-to-1 for a total of four paths.
Optional (n+2 redundancy): with 4 HBA ports, zone the HBA ports to SVC ports 1-to-2 for a total of eight paths.
Here, the term HBA port describes the SCSI initiator, and the term SVC port describes the SCSI target.
The maximum number of host paths per volume must not exceed eight. If a host has multiple HBA ports, each port must be zoned to a separate set of SVC ports to maximize high availability and performance.
To configure more than 256 hosts, you need to configure the host to I/O Group mappings on the SVC. Each I/O Group can contain a maximum of 256 hosts, so it is possible to create 1024 host objects on an eight-node SVC clustered system. Volumes can be mapped only to a host that is associated with the I/O Group to which the volume belongs.
Port masking
You can use a port mask to control the node target ports that a host can access, which satisfies two requirements:
As part of a security policy, to limit the set of WWPNs that can obtain access to any volumes through a given SVC port
As part of a scheme to limit the number of logins with mapped volumes visible to a host multipathing driver (such as SDD), and thus limit the number of host objects configured, without resorting to switch zoning
The port mask is an optional parameter of the mkhost and chhost commands. The port mask is four binary bits. Valid mask values range from 0000 (no ports enabled) to 1111 (all ports enabled). For example, a mask of 0011 enables port 1 and port 2. The default value is 1111 (all ports enabled).
The SVC supports connection to the Cisco MDS family and the Brocade family. See the following website for the latest support information:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html
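A hedged sketch of applying a mask to an existing host object (the host name is hypothetical):

chhost -mask 0011 host_AIX01

With this mask, host_AIX01 can log in only through ports 1 and 2 of each node; the same -mask parameter can be supplied to mkhost when the host object is first created.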
Note: SVC 6.3 introduces a new property for the clustered system called layer. This property is used when there is a copy services partnership between an SVC and an IBM Storwize V7000. There are two layers: replication and storage. All SVC clustered systems are in the replication layer, and this cannot be changed. By default, an IBM Storwize V7000 is in the storage layer; it must be changed with the CLI command chsystem before it can take part in any copy services partnership with an SVC.
When using SVC Advanced Copy Services, apply the following guidelines.
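A minimal sketch of that change, run on the Storwize V7000 side (not the SVC) and assuming no partnership exists yet:

chsystem -layer replication
lssystem

The lssystem output includes the layer value, so you can confirm the change before creating the copy services partnership.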
FlashCopy guidelines
Consider these FlashCopy guidelines:
Identify each application that must have a FlashCopy function implemented for its volume.
FlashCopy is a relationship between volumes. Those volumes can belong to separate Storage Pools and separate storage subsystems.
You can use FlashCopy for backup purposes by interacting with the Tivoli Storage Manager Agent, or for cloning a particular environment.
Define which FlashCopy type best fits your requirements: No copy, Full copy, Thin-Provisioned, or Incremental.
Define which FlashCopy rate best fits your requirements in terms of performance and the time to complete the FlashCopy. The relationship of the background copy rate value to the attempted number of grains to be split per second is shown in Table 3-3; a CLI sketch follows the table.
Define the grain size that you want to use. A grain is the unit of data represented by a single bit in the FlashCopy bitmap table. Larger grain sizes can cause a longer FlashCopy elapsed time and higher space usage in the FlashCopy target volume; smaller grain sizes have the opposite effect. Remember that the data structure and the source data location can modify those effects. In an actual environment, check the results of your FlashCopy procedure in terms of the data copied at every run and the elapsed time, comparing them to the new SVC FlashCopy results, and eventually adapt the grain/second and copy rate parameters to fit your environment's requirements.
Table 3-3 Grain splits per second
User percentage    Data copied per second    256 KB grain per second    64 KB grain per second
1 - 10             128 KB                    0.5                        2
11 - 20            256 KB                    1                          4
21 - 30            512 KB                    2                          8
31 - 40            1 MB                      4                          16
41 - 50            2 MB                      8                          32
51 - 60            4 MB                      16                         64
61 - 70            8 MB                      32                         128
71 - 80            16 MB                     64                         256
81 - 90            32 MB                     128                        512
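As the sketch promised above, a hedged example of creating and starting a mapping with an explicit copy rate and grain size (the volume and mapping names are hypothetical):

mkfcmap -source DB_vol -target DB_vol_T1 -name DB_map -copyrate 50 -grainsize 64
startfcmap -prep DB_map

Per Table 3-3, a copy rate of 50 attempts 2 MB per second of background copy; the -prep flag flushes the cache for the source volume before the mapping is triggered.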
Figure 3-17 contains two redundant fabrics. Part of each fabric exists at the local clustered system and at the remote clustered system. There is no direct connection between the two fabrics. Technologies for extending the distance between two SVC clustered systems can be broadly divided into two categories:
FC extenders
SAN multiprotocol routers
Due to the more complex interactions involved, IBM explicitly tests products of this class for interoperability with the SVC. The current list of supported SAN routers can be found in the supported hardware list on the SVC support website:
http://www.ibm.com/storage/support/2145
IBM has tested a number of FC extenders and SAN router technologies with the SVC. They must be planned, installed, and tested so that the following requirements are met:
The round-trip latency between sites must not exceed 80 ms (40 ms one way). For Global Mirror, this limit allows a distance between the primary and secondary sites of up to
8000 km (4970.96 miles), using a planning assumption of 100 km (62.13 miles) per 1 ms of round-trip link latency. The latency of long-distance links depends on the technology that is used to implement them. A point-to-point dark fiber-based link typically provides a round-trip latency of 1 ms per 100 km (62.13 miles) or better. Other technologies provide longer round-trip latencies, which affects the maximum supported distance. The configuration must be tested with the expected peak workloads. When Metro Mirror or Global Mirror is used, a certain amount of bandwidth is required for SVC intercluster heartbeat traffic. The amount of traffic depends on how many nodes are in each of the two clustered systems. Figure 3-18 shows the amount of heartbeat traffic, in megabits per second, that is generated by various sizes of clustered systems.
These numbers represent the total traffic between the two clustered systems when no I/O is taking place to mirrored volumes. Half of the data is sent by one clustered system, and half of the data is sent by the other clustered system. The traffic is divided evenly over all available intercluster links. Therefore, if you have two redundant links, half of this traffic is sent over each link during fault-free operation.
The bandwidth between sites must, at the least, be sized to meet the peak workload requirements, in addition to maintaining the maximum latency specified previously. The peak workload requirement must be evaluated by considering the average write workload over a period of one minute or less, plus the required synchronization copy bandwidth. With no synchronization copies active and no write I/O for volumes in Metro Mirror or Global Mirror relationships, the SVC protocols operate with the bandwidth indicated in Figure 3-18. However, the true bandwidth required for the link can only be determined by considering the peak write bandwidth to volumes participating in Metro Mirror or Global Mirror relationships and adding the peak synchronization copy bandwidth to it.
If the link between the sites is configured with redundancy so that it can tolerate single failures, the link must be sized so that the bandwidth and latency statements remain true even during single-failure conditions. The configuration must be tested to simulate the failure of the primary site (to test the recovery capabilities and procedures), including eventual failback to the primary site from the secondary. The configuration must also be tested to confirm that any failover mechanisms in the intercluster links interoperate satisfactorily with the SVC. The FC extender must be treated as a normal link.
The bandwidth and latency measurements must be made by, or on behalf of, the client. They are not part of the standard installation of the SVC by IBM. Make these
measurements during installation, and record the measurements. Testing must be repeated following any significant changes to the equipment providing the intercluster link.
If gmlinktolerance is disabled for the duration of the maintenance, it must be re-enabled after the maintenance is complete. Global Mirror volumes must have their preferred nodes evenly distributed between the nodes of the clustered systems. Each volume within an I/O Group has a preferred node property that can be used to balance the I/O load between nodes in that group. Figure 3-19 shows the correct relationship between volumes in a Metro Mirror or Global Mirror solution.
The capabilities of the storage controllers at the secondary clustered system must be provisioned to allow for the peak application workload to the Global Mirror volumes, plus the client-defined level of background copy, plus any other I/O being performed at the secondary site. The performance of applications at the primary clustered system can be limited by the performance of the back-end storage controllers at the secondary clustered system, to maximize the amount of I/O that applications can perform to Global Mirror volumes.
Perform a complete review before using SATA drives for Metro Mirror or Global Mirror secondary volumes. Using a slower disk subsystem for the secondary volumes of high-performance primary volumes can mean that the SVC cache might not be able to buffer all the writes, and flushing cache writes to SATA might slow I/O at the production site.
Storage controllers must be configured to support the Global Mirror workload that is required of them. You can: dedicate storage controllers to only Global Mirror volumes; configure the controller to guarantee sufficient quality of service for the disks being used by Global Mirror; or ensure that physical disks are not shared between Global Mirror volumes and other I/O (for example, by not splitting an individual RAID array).
MDisks within a Global Mirror storage pool must be similar in their characteristics (for example, RAID level, physical disk count, and disk speed). This requirement is true of all storage pools, but it is particularly important to maintain performance when using Global Mirror.
When a consistent relationship is stopped, for example, by a persistent I/O error on the intercluster link, the relationship enters the consistent_stopped state. I/O at the primary site continues, but the updates are not mirrored to the secondary site. Restarting the relationship begins the process of synchronizing new data to the secondary disk. While this synchronization is in progress, the relationship is in the inconsistent_copying state. Therefore, the Global Mirror secondary volume is not in a usable state until the copy has completed and the relationship has returned to a consistent state. For this
reason, it is highly advisable to create a FlashCopy of the secondary volume before restarting the relationship. When started, the FlashCopy provides a consistent copy of the data, even while the Global Mirror relationship is copying. If the Global Mirror relationship does not reach the synchronized state (if, for example, the intercluster link experiences further persistent I/O errors), the FlashCopy target can be used at the secondary site for disaster recovery purposes. If you are planning to use an FCIP intercluster link, it is extremely important to design and size the pipe correctly. Example 3-2 shows a best-guess bandwidth sizing formula.
Example 3-2 WAN link calculation example
Amount of write data within 24 hours times 4 to allow for peaks
Translate into MB/s to determine the WAN link needed
Example: 250 GB per day
250 GB * 4 = 1 TB
24 hours * 3600 secs/hr = 86400 secs
1,000,000,000,000 / 86400 = approximately 12 MB/s
Therefore, an OC3 or higher link is needed (155 Mbps or higher)

If compression is available on routers or WAN communication devices, smaller pipelines might be adequate. Note that the workload is probably not evenly spread across 24 hours. If there are extended periods of high data change rates, consider suspending Global Mirror during that time frame. If the network bandwidth is too small to handle the traffic, the application write I/O response times might be elongated. For the SVC, Global Mirror must support short-term peak write bandwidth requirements. Remember that SVC Global Mirror is much more sensitive to a lack of bandwidth than the DS8000.
You also need to consider the initial sync and resync workload. The Global Mirror partnership's background copy rate must be set to a value that is appropriate to the link and the secondary back-end storage. The more bandwidth that you give to the sync and resync operations, the less capacity remains for regular data traffic.
Do not propose Global Mirror if the data change rate will exceed the communication bandwidth or if the round-trip latency exceeds 80 - 120 ms. Round-trip latency greater than 80 ms requires a SCORE/RPQ submission.
Typical reasons for data migration include:
To move workload to rebalance a changed workload
To migrate data from an older disk subsystem to SVC-managed storage
To migrate data from one disk subsystem to another disk subsystem
Because there are multiple data migration methods, choose the method that best fits your environment, your operating system platform, your kind of data, and your application's service level agreement. Data migration methods can be defined as belonging to three groups:
Based on the operating system, using the Logical Volume Manager (LVM) or commands
Based on special data migration software
Based on the SVC data migration feature
With data migration, apply the following guidelines:
Choose the data migration method that best fits your operating system platform, your kind of data, and your service level agreement.
Check the interoperability matrix for the storage subsystem to which your data is being migrated:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html
Choose where you want to place your data after migration, in terms of the Storage Pools related to a specific storage subsystem tier.
Check whether a sufficient amount of free space or extents is available in the target Storage Pool.
Decide whether your data is critical and must be protected by a volume mirroring option, or whether it must be replicated to a remote site for disaster recovery.
Prepare offline all of the zoning and LUN masking/host mappings that you might need, to minimize downtime during the migration.
Prepare a detailed operation plan so that you do not overlook anything at data migration time.
Run a data backup before you start any data migration. Data backup must be part of the regular data management process.
You might want to use the SVC as a data mover to migrate data from one non-virtualized storage subsystem to another non-virtualized storage subsystem. In this case, you might have to add additional checks related to the specific storage subsystem to which you want to migrate.
Be careful when using slower disk subsystems for the secondary volumes of high-performance primary volumes, because the SVC cache might not be able to buffer all the writes, and flushing cache writes to SATA might slow I/O at the production site.
3.4.1 SAN
The SVC is available in several models: 2145-8F4, 2145-8G4, 2145-8A4, 2145-CF8, and 2145-CG8. All of them can connect to 2 Gbps, 4 Gbps, or 8 Gbps switches. From a performance point of view, it is better to connect the SVC to 8 Gbps switches. Correct zoning on the SAN switches brings security and performance together. Implement a dual-HBA approach at the host to access the SVC.
The SVC has a 4 GB, 8 GB, or 24 GB cache, depending on the model (24 GB in the latest 2145-CF8 and 2145-CG8 models), and it has an advanced caching mechanism. The SVC is capable of providing automated performance optimization of hotspots through the use of solid-state drives (SSDs) and Easy Tier.
The SVC's large cache and advanced cache management algorithms also allow it to improve the performance of many types of underlying disk technologies. The SVC's capability to manage, in the background, the destaging operations incurred by writes (while still supporting full data integrity) has the potential to be particularly important in achieving good database performance.
Depending on the size, age, and technology level of the disk storage system, the total cache available in the SVC can be larger, smaller, or about the same as that associated with the disk storage. Because hits to the cache can occur in either the upper (SVC) or the lower (disk controller) level of the overall system, the system as a whole can take advantage of the larger amount of cache wherever it is located. Thus, if the storage controller level of cache has the greater capacity, expect hits to this cache to occur, in addition to hits in the SVC cache. Also, regardless of their relative capacities, both levels of cache tend to play an important role in allowing sequentially organized data to flow smoothly through the system. The SVC cannot increase the throughput potential of the underlying disks in all cases, because this depends on both the underlying storage technology and the degree to which the workload exhibits hotspots or sensitivity to cache size or cache algorithms.
IBM SAN Volume Controller 4.2.1 Cache Partitioning, REDP-4426, explains the SVC's cache partitioning capability:
http://www.redbooks.ibm.com/abstracts/redp4426.html?Open
3.4.3 SVC
The SVC clustered system is scalable up to eight nodes, and performance scales nearly linearly as nodes are added to the clustered system, until it becomes limited by other components in the storage infrastructure. Although virtualization with the SVC provides a great deal of flexibility, it does not diminish the necessity of having a SAN and disk subsystems that can deliver the desired performance. Essentially, SVC performance improvements are gained by having as many MDisks as possible, thereby creating a greater level of concurrent I/O to the back end without overloading a single disk or array. Assuming that there are no bottlenecks in the SAN or on the disk subsystem, remember that specific guidelines must be followed when you perform these tasks:
Creating a Storage Pool
Creating volumes
Connecting to or configuring hosts that must receive disk space from an SVC clustered system
You can obtain more detailed information about performance and best practices for the SVC in SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open
You can obtain more information about using the TotalStorage Productivity Center to monitor your storage subsystem in Monitoring Your Storage Subsystems with TotalStorage Productivity Center, SG24-7364: http://www.redbooks.ibm.com/abstracts/sg247364.html?Open See Chapter 10, SAN Volume Controller operations using the GUI on page 631, for detailed information about collecting performance statistics.
Chapter 4. SAN Volume Controller initial configuration
Note that you have full management control of the SVC regardless of which method you choose. IBM System Storage Productivity Center is supplied by default when you purchase your SVC cluster. If you already have a previously installed SVC cluster in your environment, you might be using the SVC Console (Hardware Management Console (HMC)). You can still use it together with IBM System Storage Productivity Center, but you can log in to your SVC from only one of them at a time. If you decide to manage your SVC cluster with the SVC CLI, it does not matter whether you are using the SVC Console or IBM System Storage Productivity Center, because the SVC CLI is located on the cluster and is accessed through Secure Shell (SSH); an SSH client can be installed anywhere.
Figure 4-2 shows the TCP/IP ports and services that are used by the SVC.
For more information about TCP/IP prerequisites, see Chapter 3, Planning and configuration on page 67 and also the IBM System Storage Productivity Center: Introduction and Planning Guide, SC23-8824. To assist you in starting an SVC initial configuration, Figure 4-3 shows a common flowchart that covers all of the types of management.
In the next sections, we describe each of the steps shown in Figure 4-3.
Tivoli Storage Productivity Center for Replication is preinstalled. An additional license is required.
IBM System Storage DS Storage Manager 10.70 is available for you to optionally install on the System Storage Productivity Center server, or on a remote server. DS Storage Manager 10.70 can manage the IBM DS3000, IBM DS4000, and IBM DS5000. With DS Storage Manager 10.70, when you use Tivoli Storage Productivity Center to add and discover a DS CIM Agent, you can launch the DS Storage Manager from the topology viewer, the Configuration Utility, or the Disk Manager of the Tivoli Storage Productivity Center.
IBM Java 1.6 is preinstalled and supports DS Storage Manager 10.70. You do not need to download Java from Sun Microsystems.
DS CIM Agent management commands: the DS CIM Agent management commands (DSCIMCLI) for 5.5.0.3 are preinstalled on the System Storage Productivity Center.
SSPC supports SVC 6.1 and later code levels, as well as the IBM Storwize V7000. It also supports a manual installation of the 5.1 GUI (the SVC Console needed for SVC 5.1 or previous SVC releases is also available on the IBM website). With SVC 6.1 and later code levels, the GUI console is embedded in the SVC cluster, so there is no longer a need to install any SVC software directly on the SSPC.
IBM DB2 Enterprise Server Edition is preinstalled.
PuTTY (SSH client software) is preinstalled.
Figure 4-4 shows the product stack in the IBM System Storage Productivity Center Console 1.5.
IBM System Storage Productivity Center has all of the software components preinstalled and tested on a System x machine, model IBM System Storage Productivity Center 2805-MC5, with Windows installed on it. All the software components installed on the IBM System Storage Productivity Center can also be ordered and installed on hardware that meets or exceeds the minimum requirements. For a detailed guide to the IBM System Storage Productivity Center, refer to IBM System Storage Productivity Center Software Installation and User's Guide, SC23-8823. For information pertaining to physical connectivity to the SVC, see Chapter 3, Planning and configuration on page 67.
4.2.2 SVC installation planning information for System Storage Productivity Center
Consider the following steps when planning the System Storage Productivity Center installation:
Verify that the hardware and software prerequisites have been met.
Determine the location of the rack where the System Storage Productivity Center is to be installed.
Verify that the System Storage Productivity Center will be installed in line of sight to the SVC nodes.
Verify that you have a keyboard, mouse, and monitor available to use.
Determine the cabling required.
Determine the network IP address.
Determine the System Storage Productivity Center host name.
For detailed installation guidance, see IBM System Storage Productivity Center: Introduction and Planning Guide, SC23-8824:
https://www-304.ibm.com/systems/support/supportsite.wss/supportresources?brandind=5000033&familyind=5356448
Also see IBM Tivoli Storage Productivity Center and IBM Tivoli Storage Productivity Center for Replication Installation and Configuration Guide, SC27-2337:
http://www-01.ibm.com/support/docview.wss?rs=1181&uid=ssg1S7002597
Figure 4-5 shows the front view of the System Storage Productivity Center Console based on the 2805-MC5 hardware.
Figure 4-6 shows a rear view of System Storage Productivity Center Console based on the 2805-MC5 hardware.
Figure 4-7 SVC 8F2 node and SVC 8F4 node front and operator panel
Use Figure 4-8 for the SVC Node 2145-8G4 and 2145-8A4 models.
Use Figure 4-9 as a reference for the SVC Node 2145-CF8 model; the figure shows the CF8 model front panel.
SVC V6.1 and later code levels introduce a new method for performing service tasks. In addition to performing service tasks from the front panel, you can also service a node through an Ethernet connection by using either a web browser or the command-line interface. An additional service IP address for each node is required. For more details, see 4.4.3, Configuring the Service IP Addresses on page 131, and 10.17, Service Assistant with the GUI on page 863.
4.3.2 Prerequisites
Ensure that the SVC nodes are physically installed and that Ethernet and Fibre Channel connectivity has been correctly configured. For information about physical connectivity to the SVC, see Chapter 3, Planning and configuration on page 67.
Prior to configuring the cluster, ensure that the following information is available:
License: The license indicates whether the client is permitted to use FlashCopy, Metro Mirror, or both. It also indicates how much capacity the client is licensed to virtualize.
For IPv4 addressing: Cluster IPv4 addresses (one address for the cluster and another address for the service address), the IPv4 subnet mask, and the gateway IPv4 address.
For IPv6 addressing: Cluster IPv6 addresses (one address for the cluster and another address for the service address), the IPv6 prefix, and the gateway IPv6 address.
You must create a cluster to use the SAN Volume Controller virtualized storage. The first phase of creating a cluster is performed from the front panel of the SAN Volume Controller. The second phase is performed from a web browser accessing the management GUI.
Figure 4-11 Cluster IPv4? and Cluster IPv6? options on the front panel display
If the New Cluster IPv4? or New Cluster IPv6? action is displayed, move directly to step 5. If neither action is displayed, this node is already a member of a cluster:
a. Press and release the up or down button until Actions is displayed.
b. Press and release the select button to return to the Main Options menu.
c. Press and release the up or down button until Cluster: is displayed. The name of the cluster that the node belongs to is displayed on line 2 of the panel.
In this case, there are two options:
a. If you want to delete this node from the cluster:
i. Press and release the up or down button until Actions is displayed.
ii. Press and release the select button.
iii. Press and release the up or down button until Remove Cluster? is displayed.
iv. Press and hold the up button.
v. Press and release the select button.
vi. Press and release the up or down button until Confirm remove? is displayed.
vii. Press and release the select button.
viii. Release the up button, which deletes the cluster information from the node.
Go back to step 1 on page 115 and start again.
b. If you do not want this node to be removed from an existing cluster, review the situation and determine the correct nodes to include in the new cluster.
5. Press and release the select button to create the new cluster.
6. Press and release the select button again to modify the IP address.
7. Use the up or down navigation buttons to change the value of the first field of the IP address to the chosen value.
Notes: For IPv4, pressing and holding the up or down button increments or decrements the IP address field in units of 10. The field value rotates from 0 to 255 with the down button, and from 255 to 0 with the up button.
For IPv6, the address and the gateway address consist of eight 4-digit hexadecimal values. Enter the full address by working across a series of four panels; each panel updates two of the 4-digit hexadecimal values that make up the IPv6 address.
8. Use the right navigation button to move to the next field. Use the up or down navigation buttons to change the value of this field.
9. Repeat steps 7 and 8 for each of the remaining fields of the IP address.
10. When the last field of the IP address has been changed, press the select button.
11. Press the right arrow button:
a. For IPv4, IPv4 Subnet: is displayed.
b. For IPv6, IPv6 Prefix: is displayed.
12. Press the select button.
13. Change the fields for IPv4 Subnet in the same way that the IPv4 address fields were changed. There is only a single field for IPv6 Prefix.
14. When the last field of IPv4 Subnet or IPv6 Prefix has been changed, press the select button.
15. Press the right navigation button:
a. For IPv4, IPv4 Gateway: is displayed.
b. For IPv6, IPv6 Gateway: is displayed.
16. Press the select button.
17. Change the fields for the appropriate gateway in the same way that the IPv4 or IPv6 address fields were changed.
18. When the changes to all of the gateway fields have been made, press the select button.
19. To review the settings before creating the cluster, use the right and left buttons. Make any necessary changes, then use the right and left buttons to reach Confirm Created?, and press the select button.
20. After you complete this task, the following information is displayed on the service display panel:
- Cluster: is displayed on line 1.
- A temporary, system-assigned cluster name that is based on the IP address is displayed on line 2.
If the cluster is not created, Create Failed: is displayed on line 1 of the service display. Line 2 contains an error code. Refer to the error codes that are documented in IBM System Storage SAN Volume Controller: Service Guide, GC26-7901, to identify the reason why the cluster creation failed and the corrective action to take.
After you have created the cluster on the front panel with the correct IP address format, you can finish the cluster configuration by accessing the management GUI, completing the Create Cluster wizard, and adding nodes to the cluster.
Important: At this time, do not repeat this procedure to add other nodes to the cluster. To add nodes to the cluster, follow the steps described in 9.9.2, Adding a node on page 527 and in 10.12.3, Adding a node to the cluster on page 804.
2. Enter the default superuser password: passw0rd (with a zero) and click Continue, as shown in Figure 4-13.
3. On the next page, read the license agreement carefully. To agree with it, select I agree with the terms in the license agreement and click Next, as shown in Figure 4-14.
4. At the Name, Date, and Time window (Figure 4-15), fill in the following details:
- Cluster Name (System Name): This name is case sensitive and can consist of A to Z, a to z, 0 to 9, and the underscore (_). It cannot start with a number, and it can be 1 to 60 characters in length.
- Time Zone: Select the time zone for the cluster here.
- Date and Time: Here you can change the date and the time of your cluster. If you are using a Network Time Protocol (NTP) server, you can enter the IP address of the NTP server by selecting Set NTP Server IP Address.
Click Next to confirm your changes.
5. The Change Date and Time Settings window appears to complete updates on the cluster; see Figure 4-16. When the task is completed, click Close.
6. Next, the System License window is displayed, as shown in Figure 4-17. To continue, fill out the fields for Virtualization Limit, FlashCopy Limit, Global and Metro Mirror Limit, and Real-Time Compression Limit with the number of terabytes that are licensed. If you do not have a license for one of these features, leave its value at 0. Click Next.
7. The Configure Email Event Notification window is displayed as shown in Figure 4-18.
To ensure that your system continues to run smoothly, you can enable email event notifications. Email event notifications send messages about error, warning, or informational events, as well as inventory reports, to an email address of local or remote support personnel. Ensure that all the information is valid; otherwise, email notification is disabled. If you do not want to configure notifications now, or you want to do it later, click Next and go to step 8 on page 125.
If you want to configure it, click Configure Email Event Notifications and a wizard appears.
a. On the first page, shown in Figure 4-19, fill in the information required to enable IBM Support personnel to contact this person to assist with problem resolution (Contact Name, Email Reply Address, Machine Location, and Phone). Ensure that all contact information is valid. Then, click Next.
b. On the next page, shown in Figure 4-20, configure at least one email server that is used by your site and, optionally, enable inventory reporting. Enter a valid IP address and a server port for each server added. Ensure that the email servers are valid. Inventory reports allow IBM service personnel to proactively notify you of known issues with your system. To activate this function, enable inventory reporting and choose a reporting interval in this window.
c. Next, as shown in Figure 4-21, you can configure email addresses to receive notifications. It is a best practice for one of the email addresses to be a support user with the error event notification type enabled, so that IBM service personnel are notified if an error condition occurs on your system. Ensure that all email addresses are valid.
d. The last window (Figure 4-22) is a summary of your Email Event Notification wizard. Click Finish to complete the setup.
e. The wizard closes and the additional information has been added, as shown in Figure 4-23. You can edit or discard your changes from this window. Then, click Next.
Figure 4-23 Configure Email Event Notification window with configuration information
8. Next, you can add available nodes to your cluster; see Figure 4-24.
To complete this operation, click an empty node position to view the candidate nodes.
Important: Keep in mind that you need at least two nodes per I/O Group. Add your available nodes in sequence.
For an empty slot, select the node that you want to add to your cluster from the drop-down list. Then change its name and click Add Node, as shown in Figure 4-25.
A pop-up window appears to inform you about the time required to add a node to the cluster; see Figure 4-26. To add the node, click OK.
The Add New Node window appears to complete the update on the cluster, as shown on Figure 4-27. When the task is completed, click Close.
After your node has been successfully added to the cluster, the view from Figure 4-24 is updated, as shown in Figure 4-28.
Figure 4-28 Hardware window with a second node added to the cluster
When all of your nodes have been added to your cluster, click Finish.
9. Several operations are performed to update the cluster configuration, as shown in Figure 4-29. When the task is completed, click Close.
10. Your cluster is now successfully created. However, several remaining tasks must be completed before you use the cluster, such as changing the default superuser password or defining a service IP address. We guide you through these tasks in the following sections.
3. Right-click the superuser user and select Properties, as shown in Figure 4-32.
5. Enter the new password twice and validate your change by clicking OK, as shown in Figure 4-34.
3. Select one node, then click the port to which you want to assign a service IP address; see Figure 4-37.
4. Depending on whether you have installed an IPv4 or an IPv6 cluster, there is different information to enter.
For IPv4:
- Type an IPv4 address in the IP Address field.
- Type an IPv4 subnet mask in the Subnet Mask field.
- Type an IPv4 gateway in the Gateway field.
For IPv6:
- Click the Show IPv6 button.
- Type an IPv6 prefix in the IPv6 Network Prefix field. The prefix can have a value of 0 to 127.
- Type an IPv6 address in the IP Address field.
- Type an IPv6 gateway in the Gateway field.
After the information has been entered, click OK to confirm the modification, as shown in Figure 4-38.
4.4.4 Postrequisites
Perform the following steps to complete the SVC cluster configuration. We explain all of these steps in greater detail in Chapter 9, SAN Volume Controller operations using the
command-line interface on page 467, and in Chapter 10, SAN Volume Controller operations using the GUI on page 631. A brief CLI sketch of these steps follows the list.
a. Configure SSH keys for the command-line user, as shown in 4.5, Secure Shell overview on page 133.
b. Configure user authentication and authorization.
c. Set up event notifications and inventory reporting.
d. Create the storage pools.
e. Add MDisks to the storage pools.
f. Identify and create volumes.
g. Create host objects and map volumes to them.
h. Identify and configure FlashCopy mappings and Metro Mirror relationships.
i. Back up the configuration data.
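For illustration, a minimal CLI sketch of steps d through i for a Fibre Channel host might look like the following commands. All names (Pool0, vol01, host01), the example WWPN, and the sizes are placeholders, not values from this setup:

svctask mkmdiskgrp -name Pool0 -ext 256
svctask addmdisk -mdisk mdisk4 Pool0
svctask mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -name vol01
svctask mkhost -name host01 -hbawwpn 2100000000000001
svctask mkvdiskhostmap -host host01 vol01
svcconfig backup

FlashCopy mappings and Metro Mirror relationships (step h) are created with the svctask mkfcmap and svctask mkrcrelationship commands, which are described in Chapter 9.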
To use the CLI, an SSH client must be installed on the workstation, the SSH key pair must be generated on the client system, and the client's SSH public key must be stored on the SVC clusters. The System Storage Productivity Center, or any other workstation, must have PuTTY pre-installed, a freeware implementation of SSH-2 for Windows. This software provides the SSH client function for users logged in to the SVC Console who want to invoke the CLI to manage the SVC cluster.
4.5.1 Generating public and private SSH key pairs using PuTTY
Perform the following steps to generate SSH keys on the SSH client system:
1. Start the PuTTY Key Generator to generate public and private SSH keys. From the client desktop, select Start → Programs → PuTTY → PuTTYgen.
2. On the PuTTY Key Generator GUI window (Figure 4-39), generate the keys:
a. Select SSH-2 RSA.
b. Leave the number of bits in a generated key at 1024.
c. Click Generate.
3. Move the cursor over the blank area to generate the keys. The blank area indicated by the message is the large blank rectangle inside the section of the GUI labeled Key. Continue to move the mouse pointer over the blank area until the progress bar reaches the far right. This action generates random characters to create a unique key pair.
4. After the keys are generated, save them for later use:
a. Click Save public key, as shown in Figure 4-40.
b. You are prompted for a name (for example, pubkey) and a location for the public key (for example, C:\Support Utils\PuTTY). Click Save. If another name or location is chosen, ensure that a record of the name or location is kept, because the name and location of this SSH public key must be specified in the steps that are documented in 4.5.2, Uploading the SSH public key to the SVC cluster on page 136. Tip: The PuTTY Key Generator saves the public key with no extension, by default. Use the string pub in naming the public key, for example, pubkey, to easily differentiate the SSH public key from the SSH private key. c. In the PuTTY Key Generator window, click Save private key. d. You are prompted with a warning message, as shown in Figure 4-41. Click Yes to save the private key without a passphrase.
e. When prompted, enter a name (for example, icat) and location for the private key (for example, C:\Support Utils\PuTTY). Click Save. We suggest that you use the default name icat.ppk, because in SVC clusters running versions prior to SVC 5.1, this key was used for icat application authentication and had to have this default name.
Private key extension: The PuTTY Key Generator saves the private key with the PPK extension.
5. Close the PuTTY Key Generator GUI.
6. Navigate to the directory where the private key was saved (for example, C:\Support Utils\PuTTY).
2. From the Create a User window, enter the user ID that you want to create and the password. Also select the access level that you want to assign to your user (remember that Security Administrator is the maximum level) and choose the location from which to upload the SSH public key file that you created for this user, as shown in Figure 4-43. Click OK.
3. You have completed the user creation process and uploaded the user's SSH public key, which will be paired later with the user's private .ppk key, as described in 4.5.3, Configuring the PuTTY session for the CLI on page 137. Figure 4-44 shows the successful upload of the SSH admin key.
You have now completed the basic setup requirements for the SVC cluster using the SVC cluster web interface.
Perform these steps to configure the PuTTY session on the SSH client system:
1. From the System Storage Productivity Center Windows desktop, select Start → Programs → PuTTY → PuTTY to open the PuTTY Configuration GUI window.
2. In the PuTTY Configuration window (Figure 4-45), from the Category pane on the left, click Session, if it is not already selected.
Tip: The items selected in the Category pane affect the content that appears in the right pane.
3. In the right pane, under the Specify the destination you want to connect to section, select SSH. Under the Close window on exit section, select Only on clean exit, which ensures that if there are any connection errors, they are displayed in the user's window.
4. From the Category pane on the left side of the PuTTY Configuration window, click Connection → SSH to display the PuTTY SSH Configuration window, as shown in Figure 4-46.
5. In the right pane, in the Preferred SSH protocol version section, select 2.
6. From the Category pane on the left side of the PuTTY Configuration window, select Connection → SSH → Auth.
7. In the right pane, in the Private key file for authentication field under the Authentication parameters section, either browse to or type the fully qualified directory path and file name of the SSH client private key file created earlier (for example, C:\Support Utils\PuTTY\icat.PPK). See Figure 4-47.
8. You can skip the Connection → SSH → Auth step if you created the user with password authentication only and no SSH key.
9. From the Category pane on the left side of the PuTTY Configuration window, click Session. 10.In the right pane, follow these steps, as shown in Figure 4-48: a. Under the Load, save, or delete a stored session section, select Default Settings, and click Save. b. For the Host Name (or IP address), type the IP address of the SVC cluster. c. In the Saved Sessions field, type a name (for example, SVC) to associate with this session. d. Click Save.
You can now either close the PuTTY Configuration window or leave it open to continue.
Tips: When you enter the Host Name or IP address in PuTTY, insert your SVC user followed by @ before the host name or IP address, as shown previously. This way, you do not have to enter your user each time you access your SVC cluster. Note that if you have not created an SSH key, you will be prompted for the password that you set for the user.
Normally, output that comes from the SVC is wider than the default PuTTY window size. Change your PuTTY window appearance to use a font with a character size of 8. To change it, click the Appearance item in the Category tree, as shown in Figure 4-48, then click Font and choose a font with a character size of 8.
4. If this is the first time that the PuTTY application has been used since you generated and uploaded the SSH key pair, a PuTTY Security Alert window opens, prompting you to confirm the SVC cluster's host key, as shown in Figure 4-50. Click Yes, which invokes the CLI.
5. As shown in Example 4-1, the private key used in this PuTTY session is now authenticated against the public key that was uploaded to the SVC cluster.
Example 4-1 Authenticating
Using username "admin".
Authenticating with public key "rsa-key-20100909"
IBM_2145:ITSO_SVC1:admin>

You have now completed the tasks that are required to configure the CLI for SVC administration from the SVC Console. You can close the PuTTY session.
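To verify CLI access from the new session, you can issue a simple query. A minimal sketch follows; on V6.3, the svcinfo lssystem command displays the clustered system properties (earlier code levels use svcinfo lscluster):

IBM_2145:ITSO_SVC1:admin>svcinfo lssystem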
Ethernet adapter IPv6:

        Connection-specific DNS Suffix  . :
        IP Address. . . . . . . . . . . . :
        Subnet Mask . . . . . . . . . . . :
        IP Address. . . . . . . . . . . . :
        IP Address. . . . . . . . . . . . :
        Default Gateway . . . . . . . . . :
To update a cluster, follow these steps:
1. Select Configuration → Network, as shown in Figure 4-51.
2. Select Management IP Addresses, then click port 1 of one of the nodes, as shown in Figure 4-52.
3. In the window that is shown in Figure 4-53, follow these steps:
a. Select Show IPv6.
b. Type an IPv6 prefix in the IPv6 Network Prefix field. The prefix can have a value of 0 to 127.
c. Type an IPv6 address in the IP Address field.
d. Type an IPv6 gateway in the Gateway field.
e. Click OK.
5. The Change Management task is launched on the server as shown in Figure 4-55. Click Close when the task is completed.
6. Test the IPv6 connectivity using the ping command from a cmd.exe session on your local workstation (as shown in Example 4-3).
Example 4-3 Testing IPv6 connectivity to the SVC cluster
C:\Documents and Settings\Administrator>ping 2001:0610:0000:0000:0000:0000:0000:119

Pinging 2001:610::119 from 2001:610::115 with 32 bytes of data:

Reply from 2001:610::119: time=3ms
Reply from 2001:610::119: time<1ms
Reply from 2001:610::119: time<1ms
Reply from 2001:610::119: time<1ms
Ping statistics for 2001:610::119: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 3ms, Average = 0ms

7. Test the IPv6 connectivity to the cluster using an IPv6-compatible web browser on your local workstation; see Figure 4-56.
Figure 4-56 Testing IPv6 SVC GUI access using a compatible web browser
Tip: To access an IPv6 address in a web browser, you must enclose the IP address in square brackets, as shown at the top of Figure 4-56.
8. Finally, remove the IPv4 address in the SVC GUI by accessing the same window shown in Figure 4-53, and validate this change by clicking OK.
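The same change can also be made from the CLI. The following one-line sketch assumes the V6.3 chsystemip command and its IPv6 parameter names (-clusterip_6, -gw_6, -prefix_6); the gateway value is a placeholder, and you should verify the exact syntax for your code level:

svctask chsystemip -clusterip_6 2001:610::119 -gw_6 2001:610::1 -prefix_6 64 -port 1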
Chapter 5. Host configuration
In this chapter we describe the basic host configuration procedures that are required to attach supported hosts to the IBM System Storage SAN Volume Controller (SVC).
5.1 Host attachment overview for IBM System Storage SAN Volume Controller
The IBM System Storage SAN Volume Controller supports a wide range of host types (both IBM and non-IBM), making it possible to consolidate storage in an open systems environment into a common pool of storage. The storage pool can then be utilized and managed more efficiently as a single entity from a central point on the SAN. The benefits of storage virtualization were discussed in more depth earlier in this book.
The ability to consolidate storage for attached open systems hosts provides the following benefits:
- Unified, easier storage management
- Increased utilization rate of the installed storage capacity
- Advanced Copy Services functions offered across storage systems from different vendors
- Only one kind of multipath driver to consider when attaching hosts
In this figure, the optical distance between SVC Node 1 and Host 2 is just over 40 km. To avoid latencies that lead to a performance impact, it is recommended to avoid ISL hops whenever possible. That is, in an optimal setup, the servers are connected to the same SAN switch as the SVC nodes.
Remember these limits when connecting host servers to an SVC:
- Up to 256 hosts per I/O Group, which results in a total of 1,024 hosts per cluster. Note that if the same host is connected to multiple I/O Groups of a cluster, it counts as a host in each of these groups.
- A total of 512 distinct configured host worldwide port names (WWPNs) are supported per I/O Group. This limit is the sum of the FC host ports and the host iSCSI names (an internal WWPN is generated for each iSCSI name) associated with all of the hosts that are associated with a single I/O Group.
Access from a server to an SVC cluster through the SAN fabric is defined by means of switch zoning. Consider these rules for zoning hosts with the SVC:
- Homogeneous HBA port zones: Switch zones containing HBAs must contain HBAs from similar host types and similar HBAs in the same host. For example, AIX and NT hosts must be in separate zones, and QLogic and Emulex adapters must also be in separate zones.
Important: A configuration that breaches this rule is unsupported because it can introduce instability to the environment.
- HBA to SVC port zones: Place each host HBA in a separate zone along with one or two SVC ports. If two ports are used, use one from each node in the I/O Group. Do not place more than two SVC ports in a zone with an HBA, because this results in more than the recommended number of paths as seen from the host multipath driver.
Recommended number of paths per volume (n+1 redundancy):
- With 2 HBA ports: zone HBA ports to SVC ports 1-to-2 for a total of four paths.
- With 4 HBA ports: zone HBA ports to SVC ports 1-to-1 for a total of four paths.
Optional (n+2 redundancy):
- With 4 HBA ports: zone HBA ports to SVC ports 1-to-2 for a total of eight paths.
Note: Here, the term HBA port is used to describe the SCSI initiator, and SVC port is used to describe the SCSI target.
- Maximum host paths per LU: For any volume, the number of paths through the SAN from the SVC nodes to a host must not exceed eight. For most configurations, four paths to an I/O Group (four paths to each volume that is provided by this I/O Group) are sufficient.
- Balanced host load across HBA ports: To obtain the best performance from a host with multiple ports, ensure that each host port is zoned with a separate group of SVC ports.
- Balanced host load across SVC ports: To obtain the best overall performance of the subsystem and to prevent overloading, the workload to each SVC port must be equal. You can achieve this balance by zoning approximately the same number of host ports to each SVC port.
Figure 5-3 on page 154 shows an overview of a configuration where the servers each contain two single-port HBAs. Attempt to distribute the attached hosts equally between two logical sets per I/O Group, and connect hosts from each set to the same group of SVC ports. This port group includes exactly one port from each SVC node in the I/O Group. The zoning defines the correct connections. The port groups are defined as follows:
- Hosts in host set one of an I/O Group are always zoned to the P1 and P4 ports on both nodes, for example, N1/N2 of I/O Group zero.
- Hosts in host set two of an I/O Group are always zoned to the P2 and P3 ports on both nodes of an I/O Group.
You can create aliases for these port groups (per I/O Group):
- Fabric A: IOGRP0_PG1 N1_P1;N2_P1, IOGRP0_PG2 N1_P3;N2_P3
- Fabric B: IOGRP0_PG1 N1_P4;N2_P4, IOGRP0_PG2 N1_P2;N2_P2
Create host zones by always using the host port WWPN plus the PG1 alias for hosts in the first host set, and the host port WWPN plus the PG2 alias for hosts from the second host set. If a host has to be zoned to multiple I/O Groups, simply add the PG1 or PG2 aliases from the specific I/O Groups to the host zone. Using this schema provides four paths to one I/O Group for each host and helps to maintain an equal distribution of host connections on the SVC ports. Figure 5-3 shows an overview of this host zoning schema.
When possible, use the minimum number of paths necessary to achieve a sufficient level of redundancy. For an SVC environment, no more than four paths per I/O Group are required to accomplish this. Remember that all paths must be managed by the multipath driver on the host side. If we assume that a server is connected through four ports to the SVC, each volume is seen through eight paths. With 125 volumes mapped to this server, the multipath driver has to support handling up to 1,000 active paths (8 x 125).
You can find configuration and operational details about the IBM Subsystem Device Driver (SDD) in the Multipath Subsystem Device Driver User's Guide, at the following website:
http://ibm.com/support/docview.wss?uid=ssg1S7000303
For hosts using four HBAs/ports with eight connections to an I/O Group, use the zoning schema that is shown in Figure 5-4 on page 155. You can combine this schema with the previous four-path zoning schema.
5.3 iSCSI
iSCSI is a block-level protocol that encapsulates SCSI commands into TCP/IP packets and thereby leverages an existing IP network instead of requiring FC HBAs and SAN fabric infrastructure. The iSCSI standard is defined by RFC 3720. iSCSI connectivity is a software feature that is provided by the SVC code. iSCSI-attached hosts can utilize either a single network connection or multiple network connections.
Important: Only host attachment to the SVC via iSCSI is supported. SVC-to-storage connections are not supported.
Each SVC node is equipped with two onboard Ethernet network interface cards (NICs), which can operate at a link speed of 10, 100, or 1000 Mbps. Both of them can be used to carry iSCSI traffic. Each node's NIC number 1 is used as the primary SVC cluster management port. For optimal performance, it is advisable to use a 1 Gbps Ethernet connection between the SVC and iSCSI-attached hosts when using the SVC nodes' onboard NICs.
Starting with the SVC 2145-CG8, an optional 2-port 10 Gbps Ethernet adapter (Feature Code #5700) is available. The required 10 Gbps shortwave SFPs are available as FC #5711. If the 10 GbE option is installed, no internal SSDs can be installed. The 10 GbE option is solely to be used for iSCSI traffic.
An iSCSI initiator can be implemented in two ways:
- Software initiator: Available for most operating systems, for example, AIX, Linux, and Windows.
- Hardware initiator: Implemented as a network adapter with an integrated iSCSI processing unit, also known as an iSCSI HBA.
Supported operating systems for iSCSI host attachment, as well as supported iSCSI HBAs, can be found at the following websites:
- SVC v6.3 Support Matrix: http://ibm.com/support/docview.wss?uid=ssg1S1003907
- SVC Information Center: http://publib.boulder.ibm.com/infocenter/svc/ic/index.jsp
An iSCSI target refers to a storage resource that is located on an iSCSI server or, to be more precise, to one of potentially many instances of iSCSI nodes running on that server as a target.
A host accessing SVC volumes via iSCSI connectivity uses one or more Ethernet adapters or iSCSI HBAs to connect to the Ethernet network. Both onboard Ethernet ports of an SVC node can be configured for iSCSI. If iSCSI is used for host attachment, it is advisable to dedicate Ethernet port 1 for SVC management and port 2 for iSCSI use. By doing so, port 2 can be connected to a separate network segment or VLAN for iSCSI, because the SVC does not support the use of VLAN tagging to separate management and iSCSI traffic.
Note that Ethernet link aggregation (port trunking) or channel bonding for the SVC nodes' Ethernet ports is not supported for the 1 Gbps ports in this release. For each SVC node, that is, for each instance of an iSCSI target node in the SVC node, two IPv4 and two IPv6 addresses, or iSCSI network portals, can be defined.
To set up your host server for use as an iSCSI software-based initiator with SAN Volume Controller volumes, perform the following steps (the CLI is used in this example; a command sketch follows these steps):
1. Set up your SAN Volume Controller cluster for iSCSI:
a. Select a set of IPv4 or IPv6 addresses for the Ethernet ports on the nodes that are in the I/O Groups that will use the iSCSI volumes.
b. Configure the node Ethernet ports on each SVC node in the clustered system with the svctask cfgportip command.
c. Verify that you have configured the node and the clustered system's Ethernet ports correctly by reviewing the output of the svcinfo lsportip and svcinfo lssystemip commands.
d. Use the svctask mkvdisk command to create volumes on the SAN Volume Controller clustered system.
e. Use the svctask mkhost command to create a host object on the SAN Volume Controller. It defines the host's iSCSI initiator to which the volumes are to be mapped.
f. Use the svctask mkvdiskhostmap command to map the volume to the host object in the SAN Volume Controller.
2. Set up your host server:
a. Ensure that you have configured your IP interfaces on the server.
b. Make sure that your iSCSI HBA is ready to use, or install the software for the iSCSI software-based initiator on the server, if needed.
c. On the host server, run the configuration methods for iSCSI so that the host server's iSCSI initiator logs in to the SAN Volume Controller clustered system and discovers the SAN Volume Controller volumes. The host then creates host devices for the volumes.
3. After the host devices are created, you can use them with your host applications.
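For illustration, a minimal sequence of the commands named above might look like the following sketch. All values are examples only: the pool name iSCSI_Pool, volume name iscsi_vol01, host name linux_iscsi, the IQN, and the IP addresses are not from this setup:

svctask cfgportip -node 1 -ip 10.10.10.11 -mask 255.255.255.0 -gw 10.10.10.1 2
svctask cfgportip -node 2 -ip 10.10.10.12 -mask 255.255.255.0 -gw 10.10.10.1 2
svcinfo lsportip
svctask mkvdisk -mdiskgrp iSCSI_Pool -iogrp 0 -size 50 -unit gb -name iscsi_vol01
svctask mkhost -name linux_iscsi -iscsiname iqn.1994-05.com.redhat:host1
svctask mkvdiskhostmap -host linux_iscsi iscsi_vol01

The trailing 2 on the cfgportip commands is the Ethernet port ID, following the recommendation to keep port 1 for management.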
5.3.6 Authentication
Authentication of hosts is optional; by default, it is disabled. The user can choose to enable Challenge Handshake Authentication Protocol (CHAP) authentication, which involves sharing a CHAP secret between the cluster and the host. If the host does not provide the correct secret, the SVC does not allow it to perform I/O to volumes. The cluster can also be assigned a CHAP secret.
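As a sketch, a per-host CHAP secret can be set with the svctask chhost command; the host name and secret below are examples, and the -chapsecret parameter should be verified against your code level:

svctask chhost -chapsecret mysecret01 linux_iscsi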
clustered Ethernet port. A clustered Ethernet port consists of one physical Ethernet port on each node in the cluster. The clustered Ethernet port contains configuration settings that are shared by all of these ports.
Figure 5-7 shows an example of an iSCSI target node failover. It gives a simplified overview of what happens during a planned or unplanned node restart in an SVC I/O Group. This example refers to SVC nodes with no optional 10 GbE iSCSI adapter installed.
1. During normal operation, one iSCSI target node instance is running on each SVC node. All of the IP addresses (IPv4/IPv6) belonging to this iSCSI target, including the management addresses if the node acts as the configuration node, are presented on the two ports (P1/P2) of a node.
2. During a restart of an SVC node (N1), the iSCSI target node, including all of its network portal (IPv4/IPv6) IP addresses defined on Port1/Port2 and the management (IPv4/IPv6) IP addresses (if N1 acted as the configuration node), fails over to Port1/Port2 of the partner node within the I/O Group, that is, node N2. An iSCSI initiator running on a server reconnects to its iSCSI target, that is, the same IP addresses now presented by a different node of the SVC cluster.
3. As soon as the node (N1) has finished its restart, the iSCSI target node (including its IP addresses) running on N2 fails back to N1. Again, the iSCSI initiator running on a server reconnects to its iSCSI target. The management addresses do not fail back; N2 remains in the role of the configuration node for this cluster.
For a detailed description of how to use these commands, see Chapter 9, SAN Volume Controller operations using the command-line interface on page 467. The parameters for remote services (SSH and web services) remain associated with the cluster object. During an SVC code upgrade, the configuration settings for the clustered system are applied to node Ethernet port 1.
For iSCSI-based access, using redundant network connections, and separating iSCSI traffic by using a dedicated network or VLAN, prevents any NIC, switch, or target port failure from compromising the host server's access to the volumes. Because both onboard Ethernet ports of an SVC node can be configured for iSCSI, it is advisable to dedicate Ethernet port 1 for SVC management and port 2 for iSCSI usage. By doing so, port 2 can be connected to a dedicated network segment or VLAN for iSCSI. Because the SVC does not support VLAN tagging to separate management and iSCSI traffic, one option is to assign the corresponding LAN switch port to a dedicated VLAN to separate SVC management and iSCSI traffic.
7. Perform the logical configuration on the SAN Volume Controller to define the host, volumes, and host mapping.
8. Run cfgmgr to discover and configure the SVC volumes.
The following sections detail the current support information. It is vital that you regularly check the listed websites for any updates.
Perform the following steps to configure your host system to use the fast fail and dynamic tracking attributes:
1. Issue the following command to set the fast fail attribute on each FC SCSI I/O Controller Protocol Device:
chdev -l fscsi0 -a fc_err_recov=fast_fail
The preceding command was for adapter fscsi0. Example 5-1 on page 163 shows the command for both adapters on our test system running AIX 5L V5.3.
#chdev -l fscsi1 -a fc_err_recov=fast_fail
fscsi1 changed

2. Issue the following command to enable dynamic tracking for each FC device:
chdev -l fscsi0 -a dyntrk=yes
The preceding example command was for adapter fscsi0. Example 5-2 shows the command for both adapters on our test system running AIX 5L V5.3.
Example 5-2 Enable dynamic tracking
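#chdev -l fscsi0 -a dyntrk=yes
fscsi0 changed
#chdev -l fscsi1 -a dyntrk=yes
fscsi1 changed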
Note: The fast fail and dynamic tracking attributes do not persist through an adapter delete and reconfigure. Thus, if the adapters are deleted and then configured back into the system, these attributes will be lost and will need to be reapplied.
#lsdev -Cc adapter | grep fcs
fcs0 Available 1Z-08 FC Adapter
fcs1 Available 1D-08 FC Adapter
You can display the worldwide port number (WWPN), along with other attributes including firmware level, by using the command shown in Example 5-4. Note that the WWPN is represented as Network Address.
Example 5-4 FC host adapter settings and WWPN
# lscfg -vl fcs0
  fcs0             U0.1-P2-I4/Q1        FC Adapter

        Part Number.................00P4494
        EC Level....................A
        Serial Number...............1E3120A68D
        Manufacturer................001E
        Device Specific.(CC)........2765
        FRU Number.................. 00P4495
        Network Address.............10000000C932A7FB
        ROS Level and ID............02C03951
        Device Specific.(Z0)........2002606D
        Device Specific.(Z1)........00000000
        Device Specific.(Z2)........00000000
        Device Specific.(Z3)........03000909
        Device Specific.(Z4)........FF401210
        Device Specific.(Z5)........02C03951
        Device Specific.(Z6)........06433951
      PLATFORM SPECIFIC

      Name:  fibre-channel
        Model:  LP9002
        Node:  fibre-channel@1
        Device Type:  fcp
        Physical Location: U0.1-P2-I4/Q1
Note: For AIX hosts, use the Subsystem Device Driver Path Control Module (SDDPCM) as the multipath software rather than the legacy Subsystem Device Driver (SDD). Although SDD is still supported, a discussion of it is beyond the scope of this publication. For information regarding SDD, see the Multipath Subsystem Device Driver User's Guide, GC52-1309.
SDDPCM installation
Download the appropriate version of SDDPCM and install it using the standard AIX installation procedure. The latest SDDPCM software versions are available at the following website:
http://ibm.com/support/entry/portal/Downloads/Hardware/System_Storage/Storage_software/Other_software_products/System_Storage_Multipath_Subsystem_Device_Driver/
Check the driver readme file and make sure that your AIX system meets all prerequisites. Example 5-5 shows the appropriate version of SDDPCM downloaded into the /tmp/sddpcm directory. From here, we extract it and run the inutoc command, which generates a .toc file that is needed by the installp command prior to installing SDDPCM. Finally, we run the installp command, which installs SDDPCM onto this AIX host.
Example 5-5 Installing SDDPCM on AIX
# ls -l
total 3232
-rw-r-----   1 root   system  1648640 Jul 15 13:24 devices.sddpcm.61.rte.tar
# tar -tvf devices.sddpcm.61.rte.tar
-rw-r----- 271001 449628 1638400 Oct 31 12:16:23 2007 devices.sddpcm.61.rte
# tar -xvf devices.sddpcm.61.rte.tar
x devices.sddpcm.61.rte, 1638400 bytes, 3200 media blocks.
# inutoc .
# ls -l
total 6432
-rw-r--r--   1 root   system      531 Jul 15 13:25 .toc
-rw-r-----   1 271001 449628  1638400 Oct 31 2007  devices.sddpcm.61.rte
-rw-r-----   1 root   system  1648640 Jul 15 13:24 devices.sddpcm.61.rte.tar
# installp -ac -d . all

Example 5-6 shows the lslpp command that can be used to check the version of SDDPCM currently installed.
Example 5-6 Checking SDDPCM device driver
# lslpp -l "*sddpcm*"
  Fileset                 Level    State      Description
  devices.sddpcm.61.rte   2.2.0.0  COMMITTED  IBM SDD PCM for AIX V61
  devices.sddpcm.61.rte   2.2.0.0  COMMITTED  IBM SDD PCM for AIX V61
Enabling the SDDPCM web interface is described in 5.12, Using SDDDSM, SDDPCM, and SDD web interface on page 223.
# lscfg -vl fcs* | egrep "fcs|Network"
  fcs1             U0.1-P2-I4/Q1  FC Adapter
        Network Address.............10000000C932A865
  fcs2             U0.1-P2-I5/Q1  FC Adapter
        Network Address.............10000000C94C8C1C
IBM_2145:ITSO-CLS2:admin>svcinfo lshost Atlantic
id 8
name Atlantic
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 10000000C94C8C1C
node_logged_in_count 2
state active
WWPN 10000000C932A865
node_logged_in_count 2
state active

IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Atlantic
id name     SCSI_id vdisk_id vdisk_name   wwpn             vdisk_UID
8  Atlantic 0       14       Atlantic0001 10000000C94C8C1C 6005076801A180E90800000000000060
8  Atlantic 1       15       Atlantic0002 10000000C94C8C1C 6005076801A180E90800000000000061
8  Atlantic 2       16       Atlantic0003 10000000C94C8C1C 6005076801A180E90800000000000062
# lsdev -Cc disk
hdisk0 Available  16 Bit LVD SCSI Disk Drive
hdisk1 Available  16 Bit LVD SCSI Disk Drive
hdisk2 Available  16 Bit LVD SCSI Disk Drive
hdisk3 Available  MPIO FC 2145
hdisk4 Available  MPIO FC 2145
hdisk5 Available  MPIO FC 2145
The mkvg command can now be used to create a Volume Group with the three newly configured hdisks, as shown in Example 5-11.
Example 5-11 Running the mkvg command
# mkvg -y itsoaixvg hdisk3
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg
# mkvg -y itsoaixvg1 hdisk4
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg1
# mkvg -y itsoaixvg2 hdisk5
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg2

The lspv output now shows the new Volume Group label on each of the hdisks that were included in the Volume Groups, as seen in Example 5-12.
Example 5-12 Showing the vpath assignment into the Volume Group
# pcmpath query adapter

Active Adapters : 2

Adpt#  Name    State   Mode    Select  Errors  Paths  Active
    0  fscsi1  NORMAL  ACTIVE     407       0      6       6
    1  fscsi2  NORMAL  ACTIVE     425       0      6       6
The pcmpath query adapter command displays the current state of the adapters. Both adapters show an optimal status: State is NORMAL and Mode is ACTIVE. The pcmpath query device command displays the current state of the devices; in Example 5-14, we can see the path State and Mode for each of the defined hdisks. Additionally, an asterisk (*) displayed next to a path indicates an inactive path that is configured to the non-preferred SVC node in the I/O Group.
Example 5-14 SDDPCM commands that are used to check the availability of the devices
# pcmpath query device Total Devices : 3 DEV#: 3 DEVICE NAME: hdisk3 TYPE: 2145 ALGORITHM: Load Balance SERIAL: 6005076801A180E90800000000000060 ========================================================================== Path# Adapter/Path Name State Mode Select Errors 0 fscsi1/path0 OPEN NORMAL 152 0 1* fscsi1/path1 OPEN NORMAL 48 0 2* fscsi2/path2 OPEN NORMAL 48 0 3 fscsi2/path3 OPEN NORMAL 160 0 DEV#: 4 DEVICE NAME: hdisk4 TYPE: 2145 ALGORITHM: Load Balance SERIAL: 6005076801A180E90800000000000061 ========================================================================== Path# Adapter/Path Name State Mode Select Errors 0* fscsi1/path0 OPEN NORMAL 37 0 1 fscsi1/path1 OPEN NORMAL 66 0 2 fscsi2/path2 OPEN NORMAL 71 0 3* fscsi2/path3 OPEN NORMAL 38 0 DEV#: 5 DEVICE NAME: hdisk5 TYPE: 2145 ALGORITHM: Load Balance SERIAL: 6005076801A180E90800000000000062 ========================================================================== Path# Adapter/Path Name State Mode Select Errors
   0     fscsi1/path0       OPEN    NORMAL        66       0
   1*    fscsi1/path1       OPEN    NORMAL        38       0
   2*    fscsi2/path2       OPEN    NORMAL        38       0
   3     fscsi2/path3       OPEN    NORMAL        70       0
#
5.4.9 Creating and preparing volumes for use with AIX V6.1 and SDDPCM
The itsoaixvg Volume Group is created using hdisk3. A logical volume is created using the Volume Group. Then, the testlv1 file system is created and mounted on the /testlv1 mount point, as shown in Example 5-15.
Example 5-15 Host system new Volume Group and file system configuration
# lsvg -o
itsoaixvg2
itsoaixvg1
itsoaixvg
rootvg
# crfs -v jfs2 -g itsoaixvg -a size=3G -m /itsoaixvg -p rw -a agblksize=4096
File system created successfully.
3145428 kilobytes total disk space.
New File System size is 6291456
# lsvg -l itsoaixvg
itsoaixvg:
LV NAME   TYPE     LPs  PPs  PVs
loglv00   jfs2log    1    1    1
fslv00    jfs2     384  384    1
#
5. After the capacity of the volume has been expanded, AIX needs to update its configured capacity. To initiate the capacity update on AIX, use the chvg -g vg_name command, where vg_name is the Volume Group in which the expanded volume resides. If AIX does not return any messages, the command was successful and the volume changes in this Volume Group have been saved. If AIX cannot see any changes in the volumes, it returns an explanatory message.
6. Display the new AIX configured capacity by using the lspv hdisk command; the capacity is shown in the TOTAL PPs field, in MB.
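As a minimal sketch, using the itsoaixvg Volume Group and hdisk3 from the earlier examples, the sequence is:

# chvg -g itsoaixvg
# lspv hdisk3 | grep "TOTAL PPs"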
In most manufacturers' driver readme files, you will find instructions for the Windows registry parameters that have to be set for the HBA driver.
the parallel operation of multiple vendors' storage systems on the same host without them interfering with each other, because each MPIO instance interacts only with the storage system for which its DSM is provided. MPIO is not installed with the Windows operating system by default. Instead, storage vendors must package the MPIO drivers with their own DSM. IBM Subsystem Device Driver DSM (SDDDSM) is the IBM multipath I/O solution that is based on Microsoft MPIO technology. It is a device-specific module specifically designed to support IBM storage devices on Windows Server 2003 and Windows Server 2008 (R2) servers. The intention of MPIO is to achieve better integration of multipath storage with the operating system. It also allows the use of multipathing in the SAN infrastructure during the boot process for SAN boot hosts.
To check which levels are available, go to the website: http://ibm.com/support/docview.wss?uid=ssg1S7001350#WindowsSDDDSM To download SDDDSM, go to the website: http://ibm.com/support/docview.wss?uid=ssg1S4000350#SVC After you have downloaded the appropriate archive (zip file) from the URL above, extract it to your local hard drive and launch setup.exe to install SDDDSM. A command prompt window will appear, as shown in Figure 5-9. Confirm the installation by entering Y.
After the setup has completed, enter Y again to confirm the reboot request, as shown in Figure 5-10.
After the reboot, the SDDDSM installation is complete. You can verify the installation completion in Device Manager, because the SDDDSM device will appear (Figure 5-11 on page 175), and the SDDDSM tools will have been installed (Figure 5-12 on page 176).
IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Diomede
id name    SCSI_id vdisk_id vdisk_name   wwpn             vdisk_UID
0  Diomede 0       20       Diomede_0001 210000E08B0541BC 6005076801A180E9080000000000002B
0  Diomede 1       21       Diomede_0002 210000E08B0541BC 6005076801A180E9080000000000002C
0  Diomede 2       22       Diomede_0003 210000E08B0541BC 6005076801A180E9080000000000002D
Perform the following steps to use the devices on your Windows Server 2008 R2 host:
1. Click Start, and click Run.
2. Enter the diskmgmt.msc command, and click OK. The Disk Management window opens.
3. Select Action, and click Rescan Disks (Figure 5-13).
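As an alternative to the Disk Management GUI, the same rescan can be triggered from a DiskPart session; this is simply an equivalent sketch using the standard Windows DiskPart utility:

C:\>diskpart
DISKPART> rescan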
4. The SVC disks will now appear in the Disk Management window (Figure 5-14 on page 177).
After you have assigned the SVC disks, they are also available in Device Manager. The three assigned drives are represented by SDDDSM/MPIO as IBM-2145 Multipath disk devices in the Device Manager (Figure 5-15).
5. To check that the disks are available, select Start → All Programs → Subsystem Device Driver DSM, and click Subsystem Device Driver DSM (Figure 5-16). The SDDDSM command-line utility appears.
Figure 5-16 Windows Server 2008 R2 Subsystem Device Driver DSM utility
6. Enter the datapath query device command and press Enter (Example 5-17). This command displays all of the disks and the available paths, including their states.
Example 5-17 Windows Server 2008 R2 SDDDSM command-line utility
Microsoft Windows [Version 6.0.6001]
Copyright (c) 2006 Microsoft Corporation.
C:\Program Files\IBM\SDDDSM>datapath query device

Total Devices : 3

DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002B
============================================================================
Path#    Adapter/Hard Disk             State   Mode    Select  Errors
    0    Scsi Port2 Bus0/Disk1 Part0   OPEN    NORMAL       0       0
    1    Scsi Port2 Bus0/Disk1 Part0   OPEN    NORMAL    1429       0
    2    Scsi Port3 Bus0/Disk1 Part0   OPEN    NORMAL    1456       0
    3    Scsi Port3 Bus0/Disk1 Part0   OPEN    NORMAL       0       0

DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002C
============================================================================
Path#    Adapter/Hard Disk             State   Mode    Select  Errors
    0    Scsi Port2 Bus0/Disk2 Part0   OPEN    NORMAL    1520       0
    1    Scsi Port2 Bus0/Disk2 Part0   OPEN    NORMAL       0       0
    2    Scsi Port3 Bus0/Disk2 Part0   OPEN    NORMAL       0       0
    3    Scsi Port3 Bus0/Disk2 Part0   OPEN    NORMAL    1517       0

DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002D
============================================================================
Path#    Adapter/Hard Disk             State   Mode    Select  Errors
    0    Scsi Port2 Bus0/Disk3 Part0   OPEN    NORMAL      27       0
    1    Scsi Port2 Bus0/Disk3 Part0   OPEN    NORMAL    1396       0
    2    Scsi Port3 Bus0/Disk3 Part0   OPEN    NORMAL    1459       0
    3    Scsi Port3 Bus0/Disk3 Part0   OPEN    NORMAL       0       0

C:\Program Files\IBM\SDDDSM>

SAN zoning: When following the SAN zoning guidance, with one volume and a host with two HBAs, we get this result: (number of volumes) x (number of paths per I/O Group per HBA) x (number of HBAs) = 1 x 2 x 2 = four paths.
7. Right-click the disk in Disk Management, and select Online to place the disk online (Figure 5-17).
8. Repeat step 7 for all of your attached SVC disks. 9. Right-click one disk again, and select Initialize Disk (Figure 5-18).
10.Mark all of the disks that you want to initialize, and click OK (Figure 5-19).
11.Right-click the unallocated disk space, and select New Simple Volume (Figure 5-20).
12.The New Simple Volume Wizard window opens. Click Next. 13.Enter a disk size, and click Next (Figure 5-21).
Figure 5-22 Windows Server 2008 R2: New Simple Volume
16.Click Finish, and repeat this step for every SVC disk on your host system (Figure 5-24).
A volume that is defined in a FlashCopy, Metro Mirror, or Global Mirror mapping on the SVC cannot be expanded unless the host mapping is removed. This means that the FlashCopy, Metro Mirror, or Global Mirror relationship on that volume has to be stopped before it is possible to expand the volume.
Important: If you want to expand a logical drive in an extended partition in Windows Server 2003, apply the hotfix from KB841650, which is available from the Microsoft Knowledge Base at this website:
http://support.microsoft.com/kb/841650/
Use the updated DiskPart version for Windows Server 2003, which is available from the Microsoft Knowledge Base at this website:
http://support.microsoft.com/kb/923076/
If the volume is part of a Microsoft Cluster (MSCS), Microsoft recommends shutting down all but one of the MSCS cluster nodes. Applications in the resource that accesses the volume to be expanded should also be stopped before expanding the volume. Applications running in other resources can continue to run. After expanding the volume, start the application and the resource, and then restart the other nodes in the MSCS.
To expand a volume in use on a Windows Server host, use the Windows DiskPart utility. To start DiskPart, select Start → Run, and enter DiskPart. DiskPart was developed by Microsoft to ease administration of storage on Windows hosts. It is a command-line interface that you can use to manage disks, partitions, and volumes by using scripts or direct input on the command line. You can list disks and volumes, select them, and, after selecting them, get more detailed information, create partitions, extend volumes, and more. For more information about DiskPart, see the Microsoft website:
http://www.microsoft.com
Further information about expanding partitions of a cluster shared disk is available at the following website:
http://support.microsoft.com/kb/304736
The following discussion shows an example of how to expand an SVC volume on a Windows Server 2003 host. To list a volume's size, use the svcinfo lsvdisk <VDisk_name> command. For Senegal_bas0001, this command shows that the capacity is 10 GB before the volume is expanded, and it also shows the vdisk_UID. To find which vpath this volume is on the Windows Server 2003 host, we use the datapath query device SDD command on the Windows host (Figure 5-25). We can see that the serial 6005076801A180E9080000000000000F of Disk1 on the Windows host matches the volume ID of Senegal_bas0001. To see the size of the volume on the Windows host, we use Disk Manager, as shown in Figure 5-25.
This window shows that the volume size is 10 GB. To expand the volume on the SVC, we use the svctask expandvdisksize command to increase the capacity of the volume. In this example, we expand the volume by 1 GB (Example 5-18).
Example 5-18 svctask expandvdisksize command
IBM_2145:ITSO-CLS2:admin>svctask expandvdisksize -size 1 -unit gb Senegal_bas0001
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk Senegal_bas0001
id 7
name Senegal_bas0001
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_0_DS45
capacity 11.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801A180E9080000000000000F
throttling 0
preferred_node_id 3
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_0_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 11.00GB
real_capacity 11.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

To check that the volume has been expanded, we use the svcinfo lsvdisk command. In Example 5-18, we can see that the Senegal_bas0001 volume has been expanded to 11 GB in capacity. After performing a disk rescan in Windows, you see the new unallocated space in Windows Disk Management, as shown in Figure 5-26.
This window shows that Disk1 now has 1 GB of new, unallocated capacity. To make this capacity available to the file system, use the following DiskPart commands, as shown in Example 5-19:

diskpart        Starts DiskPart in a DOS prompt
list volume     Shows all available volumes
select volume   Selects the volume to expand
detail volume   Displays details for the selected volume, including the unallocated capacity
extend          Extends the volume to the available unallocated space
C:\>diskpart
Microsoft DiskPart version 5.2.3790.3959
Copyright (C) 1999-2001 Microsoft Corporation.
On computer: SENEGAL

DISKPART> list volume

  Volume ###  Ltr  Label        Fs    Type       Size   Status   Info
  ----------  ---  -----------  ----  ---------  -----  -------  ------
  Volume 0    C                 NTFS  Partition  75 GB  Healthy  System
  Volume 1    S    SVC_Senegal  NTFS  Partition  10 GB  Healthy
  Volume 2    D                       DVD-ROM    0 B    Healthy

DISKPART> select volume 1

Volume 1 is the selected volume.

DISKPART> detail volume

  Disk ###  Status  Size   Free     Dyn  Gpt
  --------  ------  -----  -------  ---  ---
* Disk 1    Online  11 GB  1020 MB

Readonly               : No
Hidden                 : No
No Default Drive Letter: No
Shadow Copy            : No

DISKPART> extend

DiskPart successfully extended the volume.

DISKPART> detail volume

  Disk ###  Status  Size   Free  Dyn  Gpt
  --------  ------  -----  ----  ---  ---
* Disk 1    Online  11 GB  0 B

Readonly               : No
Hidden                 : No
No Default Drive Letter: No
Shadow Copy            : No

After extending the volume, the detail volume command shows that there is no free capacity on the volume anymore. The list volume command shows the file system size. The Disk Management window also shows the new disk size; see Figure 5-27.
The example here refers to a Windows Basic Disk. Dynamic disks can be expanded by expanding the underlying SVC volume. The new space appears as unallocated space at the end of the disk. In this case, you do not need to use the DiskPart tool. Instead, you can use the Windows Disk Management functions to allocate the new space. Expansion works regardless of the volume type (simple, spanned, mirrored, and so on) on the disk. Dynamic disks can be expanded without stopping I/O in most cases.
Important: Never try to upgrade your Basic Disk to a Dynamic Disk, or vice versa, without backing up your data, because this operation is disruptive to the data due to a change in the position of the logical block address (LBA) on the disks.
Figure 5-25 on page 184 shows the Disk Manager before removing the disk. We will remove Disk 1. To find the correct volume information, we find the Serial/UID number by using SDD (Example 5-20).
Example 5-20 Removing SVC disk from the Windows server
DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED SERIAL: 6005076801A180E9080000000000000F ============================================================================ Path# Adapter/Hard Disk State Mode Select Errors 0 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 1471 0 1 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 0 0 2 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 0 0 3 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 1324 0 DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED SERIAL: 6005076801A180E90800000000000010 ============================================================================ Path# Adapter/Hard Disk State Mode Select Errors 0 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 20 0 1 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 94 0 2 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 55 0 3 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 0 0 DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED SERIAL: 6005076801A180E90800000000000011 ============================================================================ Path# Adapter/Hard Disk State Mode Select Errors 0 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 100 0 1 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 0 0 2 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 0 0 3 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 69 0
Knowing the Serial/UID of the volume and the host name, Senegal, we find the host mapping to remove by using the lshostvdiskmap command on the SVC, and then we remove the actual host mapping (Example 5-21).
Example 5-21 Finding and removing the host mapping
IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Senegal
id name    SCSI_id vdisk_id vdisk_name      wwpn             vdisk_UID
1  Senegal 0       7        Senegal_bas0001 210000E08B89B9C0 6005076801A180E9080000000000000F
1  Senegal 1       8        Senegal_bas0002 210000E08B89B9C0 6005076801A180E90800000000000010
1  Senegal 2       9        Senegal_bas0003 210000E08B89B9C0 6005076801A180E90800000000000011

IBM_2145:ITSO-CLS2:admin>svctask rmvdiskhostmap -host Senegal Senegal_bas0001

IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Senegal
id name    SCSI_id vdisk_id vdisk_name      wwpn             vdisk_UID
1  Senegal 1       8        Senegal_bas0002 210000E08B89B9C0 6005076801A180E90800000000000010
1  Senegal 2       9        Senegal_bas0003 210000E08B89B9C0 6005076801A180E90800000000000011
Here, we can see that the volume has been removed from the server's mapping list. On the server, we then perform a disk rescan in Disk Management, and we now see that the correct disk (Disk1) has been removed, as shown in Figure 5-28.
SDDDSM also shows us that the status for all paths to Disk1 has changed to CLOSE, because the disk is not available (Example 5-22 on page 190).
DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED SERIAL: 6005076801A180E9080000000000000F ============================================================================ Path# Adapter/Hard Disk State Mode Select Errors 0 Scsi Port2 Bus0/Disk1 Part0 CLOSE NORMAL 1471 0 1 Scsi Port2 Bus0/Disk1 Part0 CLOSE NORMAL 0 0 2 Scsi Port3 Bus0/Disk1 Part0 CLOSE NORMAL 0 0 3 Scsi Port3 Bus0/Disk1 Part0 CLOSE NORMAL 1324 0 DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED SERIAL: 6005076801A180E90800000000000010 ============================================================================ Path# Adapter/Hard Disk State Mode Select Errors 0 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 20 0 1 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 124 0 2 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 72 0 3 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 0 0 DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED SERIAL: 6005076801A180E90800000000000011 ============================================================================ Path# Adapter/Hard Disk State Mode Select Errors 0 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 134 0 1 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 0 0 2 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 0 0 3 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 82 0 The disk (Disk1) is now removed from the server. However, to remove the SDDDSM information of the disk, the server has to be rebooted at a convenient time.
For more information about the CLI, see Chapter 9, SAN Volume Controller operations using the command-line interface on page 467.
Create a free pool of volumes and a reserved pool of volumes on the SAN Volume Controller.
5.7.2 System requirements for the IBM System Storage hardware provider
Ensure that your system satisfies the following requirements before you install the IBMVSS and Virtual Disk Service software on the Windows operating system:
- SAN Volume Controller with FlashCopy enabled
- IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service (VDS) software
5. Accept the license agreement on the next screen; the Choose Destination Location window then opens (Figure 5-30). Click Next to accept the default directory where the setup program will install the files, or click Change to select another directory and then click Next.
7. The next window asks you to select a CIM server, that is, the SVC. Unlike older SVC versions, the config node now provides the CIM service on the cluster IP address. Either select the correct entry from the automatically discovered CIM servers, or select Enter the CIM Server address manually, and click Next (Figure 5-32 on page 194).
8. The Enter CIM Server Details window opens. Enter the following information in the fields (Figure 5-33):
a. The CIM Server Address field is populated with the URL that corresponds to the CIM server address chosen in the previous step.
b. In the CIM User field, type the user name that the IBMVSS software will use to gain access to the SVC.
c. In the CIM Password field, type the password for the SVC user name provided in the previous step, and click Next.
9. In the next window, click Finish. If necessary, the InstallShield Wizard prompts you to restart the system (Figure 5-34 on page 195).
Additional information: If these settings change after installation, you can use the ibmvcfg.exe tool to update the Microsoft Volume Shadow Copy and Virtual Disk Services software with the new settings. If you do not have the CIM Agent server, port, or user information, contact your CIM Agent administrator.
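For example, if the CIM server address or the credentials change later, the settings could be refreshed with the ibmvcfg set commands (documented in Table 5-2); the host name and credentials in this sketch are placeholders:

C:\Program Files\IBM\Hardware Provider for VSS-VDS>ibmvcfg set cimomHost svcconsole.example.com
C:\Program Files\IBM\Hardware Provider for VSS-VDS>ibmvcfg set cimomPort 5999
C:\Program Files\IBM\Hardware Provider for VSS-VDS>ibmvcfg set user admin
C:\Program Files\IBM\Hardware Provider for VSS-VDS>ibmvcfg set password passw0rd
C:\Program Files\IBM\Hardware Provider for VSS-VDS>ibmvcfg showcfg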
C:\Users\Administrator>vssadmin list providers
vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
(C) Copyright 2001-2005 Microsoft Corp.

Provider name: 'Microsoft Software Shadow Copy provider 1.0'
   Provider type: System
   Provider Id: {b5946137-7b9f-4925-af80-51abd60b20d5}
   Version: 1.0.0.7
Provider name: 'IBM System Storage Volume Shadow Copy Service Hardware Provider'
   Provider type: Hardware
   Provider Id: {d90dd826-87cf-42ce-a88d-b32caa82025b}
   Version: 4.2.1.0816

If all of these verification tasks succeed, the IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software was installed successfully on the Windows server.
IBM_2145:ITSO-CLS2:admin>svctask mkhost -name VSS_FREE -hbawwpn 5000000000000000 -force
Host, id [2], successfully created

2. Create a virtual host for the reserved pool of volumes. You can use the default name VSS_RESERVED or specify another name. Associate the host with the WWPN 5000000000000001 (14 zeroes); see Example 5-25.
Example 5-25 Creating an mkhost for the reserved pool
IBM_2145:ITSO-CLS2:admin>svctask mkhost -name VSS_RESERVED -hbawwpn 5000000000000001 -force Host, id [3], successfully created 3. Map the logical units (volumes) to the free pool of volumes. The volumes cannot be mapped to any other hosts. If you already have volumes created for the free pool of volumes, you must assign the volumes to the free pool. 4. Create host mappings between the volumes selected in step 3 and the VSS_FREE host to add the volumes to the free pool. Alternatively, you can use the ibmvcfg add command to add volumes to the free pool; see Example 5-26 on page 197.
IBM_2145:ITSO-CLS2:admin>svctask mkvdiskhostmap -host VSS_FREE msvc0001
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO-CLS2:admin>svctask mkvdiskhostmap -host VSS_FREE msvc0002
Virtual Disk to Host map, id [1], successfully created

5. Verify that the volumes have been mapped. If you do not use the default WWPNs 5000000000000000 and 5000000000000001, you must configure the IBM System Storage hardware provider with the WWPNs; see Example 5-27.
Example 5-27 Verify hosts
IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap VSS_FREE
id name     SCSI_id vdisk_id vdisk_name wwpn             vdisk_UID
2  VSS_FREE 0       10       msvc0001   5000000000000000 6005076801A180E90800000000000012
2  VSS_FREE 1       11       msvc0002   5000000000000000 6005076801A180E90800000000000013
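As an alternative to creating the host mappings manually, the same volumes could be added to the free pool with the ibmvcfg add command, passing the volume serial numbers (the vdisk_UID values from Example 5-27); a sketch:

C:\Program Files\IBM\Hardware Provider for VSS-VDS>ibmvcfg add 6005076801A180E90800000000000012 6005076801A180E90800000000000013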
C:\Program Files\IBM\Hardware Provider for VSS-VDS>ibmvcfg.exe
IBM System Storage VSS Provider Configuration Tool Commands
----------------------------------------
ibmvcfg.exe <command> <command arguments>
Commands:
    /h | /help | -? | /?
    showcfg
    listvols <all|free|unassigned>
    add <volume serial number list> (separated by spaces)
    rem <volume serial number list> (separated by spaces)
Configuration:
    set user <CIMOM user name>
    set password <CIMOM password>
    set trace [0-7]
    set trustpassword <trustpassword>
    set truststore <truststore location>
    set usingSSL <YES | NO>
    set vssFreeInitiator <WWPN>
    set vssReservedInitiator <WWPN>
    set FlashCopyVer <1 | 2> (only applies to ESS)
    set cimomPort <PORTNUM>
    set cimomHost <Hostname>
    set namespace <Namespace>
    set targetSVC <svc_cluster_ip>
    set backgroundCopy <0-100>

Table 5-2 lists the available commands.
Table 5-2   Available ibmvcfg commands

ibmvcfg showcfg
    Lists the current settings.
    Example: ibmvcfg showcfg

ibmvcfg set user <username>
    Sets the user name to access the SAN Volume Controller Console.
    Example: ibmvcfg set user Dan

ibmvcfg set password <password>
    Sets the password of the user name that will access the SAN Volume Controller Console.

ibmvcfg set targetSVC <ipaddress>
    Specifies the IP address of the SAN Volume Controller on which the volumes are located when volumes are moved to and from the free pool with the ibmvcfg add and ibmvcfg rem commands. The IP address is overridden if you use the -s flag with the ibmvcfg add and ibmvcfg rem commands.

ibmvcfg set backgroundCopy <0-100>
    Sets the background copy rate for FlashCopy.

ibmvcfg set usingSSL <YES|NO>
    Specifies whether to use the Secure Sockets Layer protocol to connect to the SAN Volume Controller Console.

ibmvcfg set cimomPort <portnum>
    Specifies the SAN Volume Controller Console port number. The default value is 5999.

ibmvcfg set cimomHost <hostname>
    Sets the name of the server where the SAN Volume Controller Console is installed.

ibmvcfg set namespace <namespace>
    Specifies the namespace value that the Master Console is using. The default value is \root\ibm.

ibmvcfg set vssFreeInitiator <WWPN>
    Specifies the WWPN of the free pool host. The default value is 5000000000000000. Modify this value only if there is already a host in your environment with a WWPN of 5000000000000000.

ibmvcfg set vssReservedInitiator <WWPN>
    Specifies the WWPN of the reserved pool host. The default value is 5000000000000001. Modify this value only if there is already a host in your environment with a WWPN of 5000000000000001.

ibmvcfg listvols
    Lists all volumes, including information about the size, location, and host mappings.
    Example: ibmvcfg listvols

ibmvcfg listvols all
    Lists all volumes, including information about the size, location, and host mappings.
    Example: ibmvcfg listvols all

ibmvcfg listvols free
    Lists the volumes that are currently in the free pool.

ibmvcfg listvols unassigned
    Lists the volumes that are currently not mapped to any hosts.

ibmvcfg add <serial number list>
    Adds one or more volumes to the free pool of volumes. Use the -s parameter to specify the IP address of the SAN Volume Controller where the volumes are located. The -s parameter overrides the default IP address that is set with the ibmvcfg set targetSVC command.

ibmvcfg rem <serial number list>
    Removes one or more volumes from the free pool of volumes. Use the -s parameter to specify the IP address of the SAN Volume Controller where the volumes are located. The -s parameter overrides the default IP address that is set with the ibmvcfg set targetSVC command.
3. Install the supported HBA driver/firmware and upgrade the kernel if required. 4. Connect the Linux server FC host adapters to the switches. 5. Configure the switches (zoning) if needed. 6. Install SDD for Linux, as described in 5.8.5, Multipathing in Linux on page 201. 7. Configure the host, volumes, and host mapping in the SAN Volume Controller. 8. Rescan for LUNs on the Linux server to discover the volumes that were created on the SVC.
2. Rebuild the RAM disk that is associated with the kernel being used by using one of the following commands: If you are running on a SUSE Linux Enterprise Server operating system, run the mk_initrd command. If you are running on a Red Hat Enterprise Linux operating system, run the mkinitrd command, and then restart.
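For example, the sequence could look like the following sketch; the image file name and kernel version are derived from the running kernel, and the -f flag forces mkinitrd to overwrite the existing image:

# SUSE Linux Enterprise Server
mk_initrd

# Red Hat Enterprise Linux (image name shown is an example only)
mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)
reboot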
Installing SDD
This section describes how to install SDD for older distributions. Before performing these steps, always check for the currently supported levels, as described in 5.8.2, Configuration information on page 200. The cat /proc/scsi/scsi command displayed in Example 5-29 shows the devices that the SCSI driver has probed. In our configuration, we have two HBAs installed in our server, and we configured the zoning to access our volume from four paths.
Example 5-29 cat /proc/scsi/scsi command example
[root@diomede sdd]# cat /proc/scsi/scsi
Attached devices:
Host: scsi4 Channel: 00 Id: 00 Lun: 00
  Vendor: IBM      Model: 2145             Rev: 0000
  Type:   Unknown                          ANSI SCSI revision: 04
Host: scsi5 Channel: 00 Id: 00 Lun: 00
  Vendor: IBM      Model: 2145             Rev: 0000
  Type:   Unknown                          ANSI SCSI revision: 04
[root@diomede sdd]#
The rpm -ivh IBMsdd-1.6.3.0-5.i686.rhel4.rpm command installs the package, as shown in Example 5-30.
Example 5-30 rpm command example
[root@Palau sdd]# rpm -ivh IBMsdd-1.6.3.0-5.i686.rhel4.rpm Preparing... ########################################### [100%] 1:IBMsdd ########################################### [100%] Added following line to /etc/inittab: srv:345:respawn:/opt/IBMsdd/bin/sddsrv > /dev/null 2>&1 [root@Palau sdd]# To manually load and configure SDD on Linux, use the service sdd start command (SUSE Linux users can use the sdd start command). If you are not running a supported kernel, you will get an error message. If your kernel is supported, you see an OK success message, as shown in Example 5-31 on page 202.
[root@Palau sdd]# sdd start
Starting IBMsdd driver load:                               [  OK  ]
Issuing killall sddsrv to trigger respawn...
Starting IBMsdd configuration:                             [  OK  ]
Issue the cfgvpath query command to view the name and serial number of the volume that is configured in the SAN Volume Controller, as shown in Example 5-32.
Example 5-32 cfgvpath query example
[root@Palau ~]# cfgvpath query
RTPG command = a3 0a 00 00 00 00 00 00 08 20 00 00  total datalen=52 datalen_str=0x00 00 00 30
RTPG succeeded: sd_name=/dev/sda df_ctlr=0
/dev/sda ( 8,  0) host=0 ch=0 id=0 lun=0 vid=IBM pid=2145
serial=60050768018201bee000000000000035 lun_id=60050768018201bee000000000000035 ctlr_flag=1 ctlr_nbr=1 df_ctlr=0
RTPG command = a3 0a 00 00 00 00 00 00 08 20 00 00  total datalen=52 datalen_str=0x00 00 00 30
RTPG succeeded: sd_name=/dev/sdb df_ctlr=0
/dev/sdb ( 8, 16) host=0 ch=0 id=1 lun=0 vid=IBM pid=2145
serial=60050768018201bee000000000000035 lun_id=60050768018201bee000000000000035 ctlr_flag=1 ctlr_nbr=0 df_ctlr=0
RTPG command = a3 0a 00 00 00 00 00 00 08 20 00 00  total datalen=52 datalen_str=0x00 00 00 30
RTPG succeeded: sd_name=/dev/sdc df_ctlr=0
/dev/sdc ( 8, 32) host=1 ch=0 id=0 lun=0 vid=IBM pid=2145
serial=60050768018201bee000000000000035 lun_id=60050768018201bee000000000000035 ctlr_flag=1 ctlr_nbr=0 df_ctlr=0
RTPG command = a3 0a 00 00 00 00 00 00 08 20 00 00  total datalen=52 datalen_str=0x00 00 00 30
RTPG succeeded: sd_name=/dev/sdd df_ctlr=0
/dev/sdd ( 8, 48) host=1 ch=0 id=1 lun=0 vid=IBM pid=2145
serial=60050768018201bee000000000000035 lun_id=60050768018201bee000000000000035 ctlr_flag=1 ctlr_nbr=1 df_ctlr=0
[root@Palau ~]#

The cfgvpath command configures the SDD vpath devices, as shown in Example 5-33.
Example 5-33 cfgvpath command example
[root@Palau ~]# cfgvpath
c--------- 1 root root 253, 0 Jun 5 09:04 /dev/IBMsdd
WARNING: vpatha path sda has already been configured.
WARNING: vpatha path sdb has already been configured.
WARNING: vpatha path sdc has already been configured.
WARNING: vpatha path sdd has already been configured.
Writing out new configuration to file /etc/vpath.conf
[root@Palau ~]#
The configuration information is saved by default in the /etc/vpath.conf file. You can save the configuration information to a specified file name by entering the following command:

cfgvpath -f file_name.cfg
Issue the chkconfig command to enable SDD to run at system startup:

chkconfig sdd on

To verify the setting, enter the following command:

chkconfig --list sdd

This verification is shown in Example 5-34.
Example 5-34 sdd run level example
[root@Palau sdd]# chkconfig --list sdd
sdd     0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@Palau sdd]#
If necessary, you can disable the startup option by entering this command: chkconfig sdd off Run the datapath query commands to display the online adapters and the paths to the adapters. Notice that the preferred paths are used from one of the nodes, that is, path 0 and path 2. Path 1 and path 3 connect to the other node and are used as alternate or backup paths for high availability, as shown in Example 5-35.
Example 5-35 datapath query command example
[root@Palau ~]# datapath query adapter
Active Adapters :2
Adpt#           Name     State      Mode    Select  Errors  Paths  Active
    0  Host0Channel0    NORMAL    ACTIVE         1       0      2       0
    1  Host1Channel0    NORMAL    ACTIVE         0       0      2       0
[root@Palau ~]#
[root@Palau ~]# datapath query device
Total Devices : 1

DEV#: 0  DEVICE NAME: vpatha  TYPE: 2145  POLICY: Optimized Sequential
SERIAL: 60050768018201bee000000000000035
============================================================================
Path#          Adapter/Hard Disk    State    Mode      Select  Errors
    0          Host0Channel0/sda    CLOSE    NORMAL         1       0
    1          Host0Channel0/sdb    CLOSE    NORMAL         0       0
    2          Host1Channel0/sdc    CLOSE    NORMAL         0       0
    3          Host1Channel0/sdd    CLOSE    NORMAL         0       0
[root@Palau ~]#

SDD has three path-selection policy algorithms:

Failover only (fo): All I/O operations for the device are sent to the same (preferred) path unless the path fails because of I/O errors. Then, an alternate path is chosen for subsequent I/O operations.

Load balancing (lb): The path to use for an I/O operation is chosen by estimating the load on the adapter to which each path is attached. The load is a function of the number of I/O operations currently in process. If multiple paths have the same load, a path is chosen at random from those paths. Load-balancing mode also incorporates failover protection and is also known as the optimized policy.

Round-robin (rr): The path to use for each I/O operation is chosen at random from the paths that were not used for the last I/O operation. If a device has only two paths, SDD alternates between them.

You can dynamically change the path-selection policy by using the SDD datapath set device policy command, and you can see which policy is active on a device with the datapath query device command. Example 5-35 on page 203 shows that the active policy is Optimized Sequential.
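For example, to switch the device to the round-robin policy and verify the change, the sequence would look like this sketch (device number 0 matches the single vpath device in our examples; adjust it to your configuration):

[root@Palau ~]# datapath set device 0 policy rr
[root@Palau ~]# datapath query device

Example 5-36 shows the volume information from the SVC command-line interface.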
Example 5-36 svcinfo redhat1
IBM_2145:ITSOSVC42A:admin>svcinfo lshost linux2
id 6
name linux2
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B89C1CD
node_logged_in_count 2
state active
WWPN 210000E08B054CAA
node_logged_in_count 2
state active
IBM_2145:ITSOSVC42A:admin>
IBM_2145:ITSOSVC42A:admin>svcinfo lshostvdiskmap linux2
id name   SCSI_id vdisk_id vdisk_name wwpn             vdisk_UID
6  linux2 0       33       linux_vd1  210000E08B89C1CD 60050768018201BEE000000000000035
IBM_2145:ITSOSVC42A:admin>
IBM_2145:ITSOSVC42A:admin>svcinfo lsvdisk linux_vd1
id 33
name linux_vd1
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG0
capacity 1.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018201BEE000000000000035
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
IBM_2145:ITSOSVC42A:admin>
1. Create a partition on the vpath device, as shown in Example 5-37.

Example 5-37 fdisk command example

[root@Palau ~]# fdisk /dev/vpatha
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): m
Command action
   a   toggle a bootable flag
   b   edit bsd disklabel
   c   toggle the dos compatibility flag
   d   delete a partition
   l   list known partition types
   m   print this menu
   n   add a new partition
   o   create a new empty DOS partition table
   p   print the partition table
   q   quit without saving changes
   s   create a new empty Sun disklabel
   t   change a partition's system id
   u   change display/entry units
   v   verify the partition table
   w   write table to disk and exit
   x   extra functionality (experts only)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
e
Partition number (1-4): 1
First cylinder (1-1011, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1011, default 1011):
Using default value 1011

Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@Palau ~]#

2. Create a file system on the vpath, as shown in Example 5-38.
Example 5-38 mkfs command example
[root@Palau ~]# mkfs -t ext3 /dev/vpatha mke2fs 1.35 (28-Feb-2004) Filesystem label= OS type: Linux Block size=4096 (log=2) Fragment size=4096 (log=2) 131072 inodes, 262144 blocks 13107 blocks (5.00%) reserved for the super user First data block=0 Maximum filesystem blocks=268435456 8 block groups 32768 blocks per group, 32768 fragments per group 16384 inodes per group Superblock backups stored on blocks: 32768, 98304, 163840, 229376 Writing inode tables: done Creating journal (8192 blocks): done Writing superblocks and filesystem accounting information: done This filesystem will be automatically checked every 27 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override. [root@Palau ~]# 3. Create the mount point, and mount the vpath drive, as shown in Example 5-39.
Example 5-39 Mount point
[root@Palau ~]# mkdir /itsosvc [root@Palau ~]# mount -t ext3 /dev/vpatha /itsosvc 4. The drive is now ready for use. The df command shows us the mounted disk /itsosvc, and the datapath query command shows that four paths are available; see Example 5-40.
Example 5-40 Display mounted drives
[root@Palau ~]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      74699952   2564388  68341032   4% /
/dev/hda1               101086     13472     82395  15% /boot
none                   1033136         0   1033136   0% /dev/shm
/dev/vpatha            1032088     34092    945568   4% /itsosvc
[root@Palau ~]#
[root@Palau ~]# datapath query device
Total Devices : 1

DEV#: 0  DEVICE NAME: vpatha  TYPE: 2145  POLICY: Optimized Sequential
SERIAL: 60050768018201bee000000000000035
============================================================================
Path#          Adapter/Hard Disk    State    Mode      Select  Errors
    0          Host0Channel0/sda     OPEN    NORMAL         1       0
    1          Host0Channel0/sdb     OPEN    NORMAL      6296       0
    2          Host1Channel0/sdc     OPEN    NORMAL      6178       0
    3          Host1Channel0/sdd     OPEN    NORMAL         0       0
[root@Palau ~]#
Tip: Run insserv boot.multipath multipathd to automatically load the multipath driver and multipathd daemon during startup.

2. Enable MPIO for RHEL5 by running the following commands:

modprobe dm-multipath
modprobe dm-round-robin
service multipathd start
chkconfig multipathd on
Example 5-41 shows the commands issued on a Red Hat Enterprise Linux 5.1 operating system.
Example 5-41 Starting MPIO daemon on Red Hat Enterprise Linux
~]# modprobe dm-round-robin
~]# multipathd start
~]# chkconfig multipathd on
~]#
3. Open the multipath.conf file and follow the instructions to enable multipathing for IBM devices. The file is located in the /etc directory. Example 5-42 shows editing using vi.
Example 5-42 Editing the multipath.conf file
[root@palau etc]# vi multipath.conf 4. Add the following entry to the multipath.conf file: device { vendor "IBM" product "2145" path_grouping_policy group_by_prio prio_callout "/sbin/mpath_prio_alua /dev/%n" } Note: Example multipath.conf files can be downloaded from the IBM Subsystem Device Driver for Linux website at http://ibm.com/support/docview.wss?uid=ssg1S4000107#DM 5. Restart the multipath daemon; see Example 5-43.
Example 5-43 Stopping and starting the multipath daemon
[root@palau ~]# service multipathd stop
Stopping multipathd daemon:                                [  OK  ]
[root@palau ~]# service multipathd start
Starting multipathd daemon:                                [  OK  ]
6. Type the multipath -dl command to see the mpio configuration. You will see two groups with two paths each. All paths must have the state [active][ready] and one group will be [enabled].
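The output resembles the following sketch; the WWID, dm device, and sd device names are illustrative and will differ on your system:

[root@palau ~]# multipath -dl
mpath0 (36005076801af813f1000000000000029) dm-2 IBM,2145
[size=4.0G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=50][active]
 \_ 4:0:0:0 sdc 8:32  [active][ready]
 \_ 5:0:1:0 sdh 8:112 [active][ready]
\_ round-robin 0 [prio=10][enabled]
 \_ 4:0:1:0 sde 8:64  [active][ready]
 \_ 5:0:0:0 sdf 8:80  [active][ready]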
7. Use the fdisk command to create a partition on the SVC disk, as shown in Example 5-44.
Example 5-44 fdisk
[root@palau scsi]# fdisk -l

Disk /dev/hda: 80.0 GB, 80032038912 bytes
255 heads, 63 sectors/track, 9730 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1          13      104391   83  Linux
/dev/hda2              14        9730    78051802+  8e  Linux LVM
Disk /dev/sda: 4244 MB, 4244635648 bytes 131 heads, 62 sectors/track, 1020 cylinders Units = cylinders of 8122 * 512 = 4158464 bytes Disk /dev/sda doesn't contain a valid partition table Disk /dev/sdb: 4244 MB, 4244635648 bytes 131 heads, 62 sectors/track, 1020 cylinders Units = cylinders of 8122 * 512 = 4158464 bytes Disk /dev/sdb doesn't contain a valid partition table Disk /dev/sdc: 4244 MB, 4244635648 bytes 131 heads, 62 sectors/track, 1020 cylinders Units = cylinders of 8122 * 512 = 4158464 bytes Disk /dev/sdc doesn't contain a valid partition table Disk /dev/sdd: 4244 MB, 4244635648 bytes 131 heads, 62 sectors/track, 1020 cylinders Units = cylinders of 8122 * 512 = 4158464 bytes Disk /dev/sdd doesn't contain a valid partition table Disk /dev/sde: 4244 MB, 4244635648 bytes 131 heads, 62 sectors/track, 1020 cylinders Units = cylinders of 8122 * 512 = 4158464 bytes Disk /dev/sde doesn't contain a valid partition table Disk /dev/sdf: 4244 MB, 4244635648 bytes 131 heads, 62 sectors/track, 1020 cylinders Units = cylinders of 8122 * 512 = 4158464 bytes Disk /dev/sdf doesn't contain a valid partition table Disk /dev/sdg: 4244 MB, 4244635648 bytes 131 heads, 62 sectors/track, 1020 cylinders Units = cylinders of 8122 * 512 = 4158464 bytes Disk /dev/sdg doesn't contain a valid partition table
Disk /dev/sdh: 4244 MB, 4244635648 bytes 131 heads, 62 sectors/track, 1020 cylinders Units = cylinders of 8122 * 512 = 4158464 bytes Disk /dev/sdh doesn't contain a valid partition table Disk /dev/dm-2: 4244 MB, 4244635648 bytes 255 heads, 63 sectors/track, 516 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk /dev/dm-2 doesn't contain a valid partition table Disk /dev/dm-3: 4244 MB, 4244635648 bytes 255 heads, 63 sectors/track, 516 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk /dev/dm-3 doesn't contain a valid partition table [root@palau scsi]# fdisk /dev/dm-2 Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel Building a new DOS disklabel. Changes will remain in memory only, until you decide to write them. After that, of course, the previous content won't be recoverable. Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite) Command (m for help): n Command action e extended p primary partition (1-4) e Partition number (1-4): 1 First cylinder (1-516, default 1): Using default value 1 Last cylinder or +size or +sizeM or +sizeK (1-516, default 516): Using default value 516 Command (m for help): w The partition table has been altered! Calling ioctl() to re-read partition table. WARNING: Re-reading the partition table failed with error 22: Invalid argument. The kernel still uses the old table. The new table will be used at the next reboot. [root@palau scsi]# shutdown -r now
8. Create a file system on the multipath device, as shown in Example 5-45.

Example 5-45 mkfs command example

[root@palau ~]# mkfs -t ext3 /dev/dm-2
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
518144 inodes, 1036288 blocks
51814 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1061158912
32 block groups
32768 blocks per group, 32768 fragments per group
16192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 29 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@palau ~]#

9. Create a mount point, and mount the drive, as shown in Example 5-46.
Example 5-46 Mount point
[root@palau ~]# mkdir /svcdisk_0
[root@palau ~]# cd /svcdisk_0/
[root@palau svcdisk_0]# mount -t ext3 /dev/dm-2 /svcdisk_0
[root@palau svcdisk_0]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      73608360   1970000  67838912   3% /
/dev/hda1               101086     15082     80785  16% /boot
tmpfs                   967984         0    967984   0% /dev/shm
/dev/dm-2              4080064     73696   3799112   2% /svcdisk_0
2. Connect the server FC host adapters to the switches. 3. Configure the switches (zoning), as described in 5.9.4, VMware storage and zoning guidance on page 212. 4. Install the VMware operating system (if not already done) and check the HBA timeouts, as described in 5.9.5, Setting the HBA timeout for failover in VMware on page 213. 5. Configure the host, volumes, and host mapping in the SVC, as described in 5.9.7, Attaching VMware to volumes on page 214.
The VMFS file system can handle concurrent access from multiple physical machines because it enforces the appropriate access controls. Therefore, multiple ESX hosts can share the same set of LUNs. Theoretically, you can run all of your virtual machines on one LUN. However, for performance reasons in more complex scenarios, it can be better to load balance virtual machines over separate HBAs, storage systems, or arrays. If you run an ESX host with several virtual machines, it makes sense, for example, to use one slow array for Print and Active Directory Services guest operating systems without high I/O, and another fast array for database guest operating systems.

Using fewer volumes has the following advantages:
- More flexibility to create virtual machines without creating new space on the SVC
- More possibilities for taking VMware snapshots
- Fewer volumes to manage

Using more and smaller volumes has the following advantages:
- Separate I/O characteristics of the guest operating systems
- More flexibility (the multipathing policy and disk shares are set per volume)
- Microsoft Cluster Service requires its own volume for each cluster disk resource

More documentation about designing your VMware infrastructure is provided at these websites:
http://www.vmware.com/vmtn/resources/
http://www.vmware.com/resources/techresources/1059

Guidelines: ESX Server hosts that use shared storage for virtual machine failover or load balancing must be in the same zone. You can have only one VMFS volume per volume.
IBM_2145:ITSO-CLS1:admin>svcinfo lshost Nile
id 1
name Nile
port_count 2
type generic
mask 1111
iogrp_count 2
WWPN 210000E08B892BCD
node_logged_in_count 4
state active
WWPN 210000E08B89B8C0
node_logged_in_count 4
state active

Then, we have to set the SCSI Controller Type in VMware. By default, ESX Server disables SCSI bus sharing and does not allow multiple virtual machines to access the same VMFS file at the same time; see Figure 5-35 on page 215. But in many configurations, such as those for high availability, the virtual machines have to share the same VMFS file to share a disk.

To set the SCSI Controller Type in VMware:
1. Log on to your Infrastructure Client, shut down the virtual machine, right-click it, and select Edit settings.
2. Highlight the SCSI Controller, and select one of the three available settings, depending on your configuration:
   None: Disks cannot be shared by other virtual machines.
   Virtual: Disks can be shared by virtual machines on the same server.
   Physical: Disks can be shared by virtual machines on any server.
   Click OK to apply the setting.
3. Create your volumes on the SVC, then map them to the ESX hosts. Tips: If you want to use features, such as VMotion, the volumes that own the VMFS file have to be visible to every ESX host that will be able to host the virtual machine. In SVC, select Allow the virtual disks to be mapped even if they are already mapped to a host. The volume has to have the same SCSI ID on each ESX host. For this configuration, we created one volume and mapped it to our ESX host, as shown in Example 5-49.
Example 5-49 Mapped volume to ESX host Nile
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap Nile
id name SCSI_id vdisk_id vdisk_name wwpn             vdisk_UID
1  Nile 0       12       VMW_pool   210000E08B892BCD 60050768018301BF2800000000000010
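If you create the mapping from the CLI, the SCSI ID can be fixed explicitly so that it is identical on every ESX host; a sketch using the volume and host names from our example:

IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -force -scsi 0 -host Nile VMW_pool
Virtual Disk to Host map, id [0], successfully created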
ESX does not automatically scan for SAN changes (except when rebooting the entire ESX server). If you have made any changes to your SVC or SAN configuration, perform the following steps: 1. Open your VMware Infrastructure Client. 2. Select the host. 3. In the Hardware window, choose Storage Adapters. 4. Click Rescan.
To configure a storage device to use it in VMware, perform the following steps: 1. Open your VMware Infrastructure Client. 2. Select the host for which you want to see the assigned volumes, and click the Configuration tab. 3. In the Hardware window on the left side, click Storage. 4. To create a new storage pool, select click here to create a datastore or Add storage if the yellow field does not appear (Figure 5-36).
5. The Add storage wizard will appear. 6. Select Create Disk/Lun, and click Next. 7. Select the SVC volume that you want to use for the datastore, and click Next. 8. Review the disk layout and click Next. 9. Enter a datastore name and click Next. 10.Select a block size, enter the size of the new partition, and then, click Next. 11.Review your selections, and click Finish. Now, the created VMFS datastore appears in the Storage window (Figure 5-37). You will see the details for the highlighted datastore. Check whether all of the paths are available and that the Path Selection is set to Round Robin.
If not all of the paths are available, check your SAN and storage configuration. After fixing the problem, select Refresh to perform a path rescan. The view will be updated to the new configuration. Best practice is to use the Round Robin Multipath Policy for SVC. If you have to edit this policy, perform the following steps: 1. Highlight the datastore. 2. Click Properties. 3. Click Managed Paths. 4. Click Change (see Figure 5-37 on page 216). 5. Select Round Robin. 6. Click OK. 7. Click Close. Now, your VMFS datastore has been created, and you can start using it for your guest operating systems. Round Robin will distribute the I/O load across all available paths. If you do want to use a fixed path, the policy setting Fixed is supported as well.
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk VMW_pool
id 12
name VMW_pool
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 60.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000010
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 60.00GB
real_capacity 60.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

IBM_2145:ITSO-CLS1:admin>svctask expandvdisksize -size 5 -unit gb VMW_pool

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk VMW_pool
id 12
name VMW_pool
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 65.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000010
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 65.00GB
real_capacity 65.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
IBM_2145:ITSO-CLS1:admin>
2. Open the Virtual Infrastructure Client.
3. Select the host.
4. Select Configuration.
5. Select Storage Adapters.
6. Click Rescan.
7. Make sure that the Scan for new Storage Devices check box is marked, and click OK. After the scan has completed, the new capacity is displayed in the Details section.
8. Click Storage.
9. Right-click the VMFS volume and click Properties.
10.Click Add Extent.
11.Select the new free space, and click Next.
12.Click Next.
13.Click Finish.

The VMFS volume has now been extended, and the new space is ready for use.
OS cluster support
Solaris with Symantec Cluster V4.1, Symantec SFHA and SFRAC V4.1/5.0, and Solaris with Sun Cluster V3.1/3.2 are supported at the time of writing.
Flat Space Addressing mode is used rather than the Peripheral Device Addressing Mode.
When an inquiry command for any page is sent to LUN 0 using Peripheral Device Addressing, it is reported as Peripheral Device Type 0Ch (controller).

When any command other than an inquiry is sent to LUN 0 using Peripheral Device Addressing, SVC responds as an unmapped LUN 0 normally responds.

When an inquiry is sent to LUN 0 using Flat Space Addressing, it is reported as Peripheral Device Type 00h (direct access device) if a LUN is mapped at LUN 0, or as 1Fh (unknown device type) otherwise.

When an inquiry is sent to an unmapped LUN that is not LUN 0 using Peripheral Device Addressing, the peripheral qualifier returned is 001b and the Peripheral Device Type is 1Fh (unknown or no device type). This response is in contrast to the behavior for generic hosts, where Peripheral Device Type 00h is returned.
SDDDSM can also be configured to offer a web interface that provides some basic multipath information. Before this works, the web interface must be configured. Sddsrv does not bind to any TCP/IP port by default, but it allows port binding to be dynamically enabled or disabled. For all platforms except Linux, the multipath driver package ships an sddsrv.conf template file named sample_sddsrv.conf. On all UNIX platforms except Linux, the sample_sddsrv.conf file is located in the /etc directory. On Windows platforms, it is located in the directory where SDDDSM was installed. Create the sddsrv.conf file in the same directory by copying sample_sddsrv.conf and naming the copy sddsrv.conf. You can then dynamically change the port binding by modifying the parameters in the sddsrv.conf file, setting the values of Enableport and Loopbackbind to True. Figure 5-39 shows the start window of the multipath driver web interface.
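On a Windows host, the configuration would look like the following sketch; the default SDDDSM installation path is an assumption, so adjust it to your system. After copying the file, edit sddsrv.conf and set Enableport and Loopbackbind to True as described above:

C:\Program Files\IBM\SDDDSM>copy sample_sddsrv.conf sddsrv.conf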
Chapter 6.
Data migration
In this chapter we explain how to migrate from a conventional storage infrastructure to a virtualized storage infrastructure by using the IBM System Storage SAN Volume Controller (SVC). We also explain how the SVC can be phased out of a virtualized storage infrastructure, for example, after a trial period or after using the SVC as a data migration tool. Next, we describe how to migrate from a fully allocated volume to a thin-provisioned volume by using the volume mirroring feature and the thin-provisioned volume together. Finally, we provide you with examples of using intracluster Metro Mirror to migrate data.
If the type of the volume is image, then the volume type transitions to striped when the first extent is migrated. The MDisk access mode transitions from image to managed.
Using the -force flag: If the -force flag is not used and if volumes occupy extents on one or more of the MDisks that are specified, the command fails. When the -force flag is used and if volumes occupy extents on one or more of the MDisks that are specified, all extents on the MDisks will be migrated to the other MDisks in the storage pool if there are enough free extents in the storage pool. The deletion of the MDisks is postponed until all extents are migrated, which can take time. In the case where there are insufficient free extents in the storage pool, the command fails.
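A sketch of such a deletion (the MDisk and pool names are examples only):

IBM_2145:ITSO_SVC1:admin>svctask rmmdisk -mdisk mdisk5 -force STGPool_DS3500-1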
Rule: For the migration to be acceptable, the source and destination storage pool must have the same extent size. Note that volume mirroring can also be used to migrate a volume between storage pools. This method can be used if the extent sizes of the two pools are not the same.
In Figure 6-1, we illustrate volume V3 migrating from Pool 2 to Pool 3. Extents are allocated to the migrating volume from the set of MDisks in the target storage pool, using the extent allocation algorithm. The process can be prioritized by specifying the number of threads that will be used in parallel (from 1 to 4) while migrating; using only one thread will put the least background load on the system. The offline rules apply to both storage pools. Therefore, referring back to Figure 6-1, if any of the M4, M5, M6, or M7 MDisks go offline, then the V3 volume goes offline. If the M4 MDisk goes offline, then V3 and V5 go offline, but V1, V2, V4, and V6 remain online. If the type of the volume is image, then the volume type transitions to striped when the first extent is migrated. The MDisk access mode transitions from image to managed. For the duration of the move, the volume is listed as being a member of the original storage pool. For the purposes of configuration, the volume moves to the new storage pool instantaneously at the end of the migration.
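Applied to the example in Figure 6-1, migrating V3 into Pool 3 with the maximum of four parallel threads would look like this sketch (the volume and pool names are illustrative):

IBM_2145:ITSO_SVC1:admin>svctask migratevdisk -vdisk V3 -mdiskgrp Pool3 -threads 4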
Migrate image mode-to-image mode between storage pools.
Migrate managed mode-to-image mode between storage pools.

These conditions must apply to be able to migrate:
- The destination MDisk must be greater than or equal to the size of the volume.
- The MDisk that is specified as the target must be in an unmanaged state at the time that the command is run.

If the migration is interrupted by a cluster recovery, the migration resumes after the recovery completes. If the migration involves moving between storage pools, the volume behaves as described in 6.2.3, Migrating a volume between storage pools on page 229. Regardless of the mode in which the volume starts, it is reported as being in managed mode during the migration. Also, both of the MDisks involved are reported as being in image mode during the migration. Upon completion of the command, the volume is classified as an image mode volume.
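A migration to image mode would look like the following sketch; the volume, target MDisk, and pool names are examples only:

IBM_2145:ITSO_SVC1:admin>svctask migratetoimage -vdisk volume1 -mdisk mdisk10 -mdiskgrp imagepool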
6.3.1 Parallelism
You can perform several of the following activities in parallel.
Per cluster
An SVC cluster supports up to 32 active concurrent instances of members of the set of migration activities: Migrate multiple extents Migrate between storage pools Migrate off of a deleted MDisk Migrate to image mode These high-level migration tasks operate by scheduling single extent migrations: Up to 256 single extent migrations can run concurrently. This number is made up of single extent migrates, which result from the operations previously listed. The Migrate Multiple Extents and Migrate Between storage pools commands support a flag that allows you to specify the number of parallel threads to use, between 1 and 4. This parameter affects the number of extents that will be concurrently migrated for that migration operation. Thus, if the thread value is set to 4, up to four extents can be migrated concurrently for that operation, subject to other resource constraints.
Per MDisk
The SVC supports up to four concurrent single extent migrates per MDisk. This limit does not take into account whether the MDisk is the source or the destination. If more than four single extent migrates are scheduled for a particular MDisk, further migrations are queued pending the completion of one of the currently running migrations.
Chunks
Regardless of the extent size for the storage pool, data is migrated in units of 16 MB. In this description, this unit is referred to as a chunk. We describe the algorithm that is used to migrate an extent: 1. Pause (pause means to queue all new I/O requests in the virtualization layer in SVC and to wait for all outstanding requests to complete) all I/O on the source MDisk on all nodes in the SVC cluster. The I/O to other extents is unaffected. 2. Unpause (resume) I/O on all of the source MDisk extents apart from writes to the specific chunk that is being migrated. Writes to the extent are mirrored to the source and destination. 3. On the node that is performing the migration, for each 256 KB section of the chunk: Synchronously read 256 KB from the source. Synchronously write 256 KB to the target. 4. After the entire chunk has been copied to the destination, repeat the process for the next chunk within the extent.
5. After the entire extent has been migrated, pause all I/O to the extent being migrated, perform a checkpoint on the extent move to on-disk metadata, redirect all further reads to the destination, and stop mirroring writes (writes only to destination). 6. If the checkpoint fails, the I/O is unpaused. During the migration, the extent can be divided into three regions, as shown in Figure 6-2. Region B is the chunk that is being copied. Writes to Region B are queued (paused) in the virtualization layer waiting for the chunk to be copied. Reads to Region A are directed to the destination, because this data has already been copied. Writes to Region A are written to both the source and the destination extent to maintain the integrity of the source extent. Reads and writes to Region C are directed to the source, because this region has yet to be migrated. The migration of a chunk requires 64 synchronous reads and 64 synchronous writes. During this time, all writes to the chunk from higher layers in the software stack (such as cache destages) are held back. If the back-end storage is operating with significant latency, it is possible that this operation might take time (minutes) to complete, which can have an adverse affect on the overall performance of the SVC. To avoid this situation, if the migration of a particular chunk is still active after one minute, the migration is paused for 30 seconds. During this time, writes to the chunk are allowed to proceed. After 30 seconds, the migration of the chunk is resumed. This algorithm is repeated as many times as necessary to complete the migration of the chunk.
Figure 6-2 Migrating an extent (16 MB chunk; not to scale)
SVC guarantees read stability during data migrations even if the data migration is stopped by a node reset or a cluster shutdown. This read stability is possible because SVC disallows writes on all nodes to the area being copied, and upon a failure, the extent migration is restarted from the beginning. At the conclusion of the operation, we will have these results: Extents migrated in 16 MB chunks, one chunk at a time. Chunks are either copied, in progress, or not copied. When the extent is finished, its new location is saved.
Figure 6-3 shows the data migration and write operation relationship.
MDisk modes
There are three MDisk modes:

Unmanaged MDisk: An MDisk is reported as unmanaged when it is not a member of any storage pool. An unmanaged MDisk is not associated with any volumes and has no metadata stored on it. The SVC does not write to an MDisk that is in unmanaged mode, except when it attempts to change the mode of the MDisk to one of the other modes.

Image mode MDisk: Image mode provides a direct block-for-block translation from the MDisk to the volume, with no virtualization. Image mode volumes have a minimum size of one block (512 bytes) and always occupy at least one extent. An image mode MDisk is associated with exactly one volume.

Managed mode MDisk: Managed mode MDisks contribute extents to the pool of available extents in the storage pool. Zero or more managed mode volumes can use these extents.
Managed mode to unmanaged mode This transition occurs when an MDisk is removed from a storage pool. Unmanaged mode to image mode This transition occurs when an image mode MDisk is created on an MDisk that was previously unmanaged. It also occurs when an MDisk is used as the target for a migration to image mode. Image mode to unmanaged mode There are two distinct ways in which this transition can happen: When an image mode volume is deleted. The MDisk that supported the volume becomes unmanaged. When an image mode volume is migrated in image mode to another MDisk, the MDisk that is being migrated from remains in image mode until all data has been moved off of it. It then transitions to unmanaged mode. Image mode to managed mode This transition occurs when the image mode volume that is using the MDisk is migrated into managed mode. Managed mode to image mode is impossible There is no operation that will take an MDisk directly from managed mode to image mode. You can achieve this transition by performing operations that convert the MDisk to unmanaged mode and then to image mode.
Figure: MDisk mode transitions between Not in group, Managed mode, and Image mode (add to group, remove from group, complete migrate)
Image mode volumes have the special property that the last extent in the volume can be a partial extent. Managed mode disks do not have this property. To perform any type of migration activity on an image mode volume, the image mode disk must first be converted into a managed mode disk. If the image mode disk has a partial last
extent, this last extent in the image mode volume must be the first extent to be migrated. This migration is handled as a special case. After this special migration operation has occurred, the volume becomes a managed mode volume and is treated in the same way as any other managed mode volume. If the image mode disk does not have a partial last extent, no special processing is performed. The image mode volume is simply changed into a managed mode volume and is treated in the same way as any other managed mode volume. After data is migrated off a partial extent, there is no way to migrate data back onto the partial extent.
Migrating your volume to an image mode volume
Perform this activity if you are removing the SVC from your SAN environment after a trial period. We describe this step in detail in 6.5.5, Migrating a volume from managed mode to image mode on page 263.

Moving an image mode volume to another image mode volume
Use this procedure to migrate data from one storage subsystem to another storage subsystem. We describe this step in detail in 6.6.6, Migrating the volumes to image mode volumes on page 299.

You can use these activities individually or together to migrate your server's LUNs from one storage subsystem to another storage subsystem using the SVC as your migration tool. The only downtime that is required for these activities is the time that it takes you to remask and remap the LUNs between the storage subsystems and your SVC.
6.5.1 Windows Server 2008 host system connected directly to the LSI 3500
In our example configuration, we use a Windows Server 2008 host and an LSI 3500 storage box. The host has two LUNs (drives X and Y), which are part of one LSI 3500 array. Before the migration, LUN masking is defined in the LSI 3500 to give the Windows Server 2008 host system access to the LSI 3500 volumes labeled X and Y (see Figure 6-6 on page 239). Figure 6-5 shows the starting zoning scenario.
Figure 6-6 on page 239 shows the two LUNs (drive X and Y).
Figure 6-7 shows the properties of one of the LSI 3500 disks using the Subsystem Device Driver DSM (SDDDSM). The disk appears as an LSI INF-01-00 Multipath Disk Device.
6.5.2 Adding the SVC between the host system and the LSI 3500
Figure 6-8 shows the new environment with the SVC and a second storage subsystem attached to the SAN. The second storage subsystem is not required to migrate to the SVC, but in the following examples, we show that it is possible to move data across storage subsystems without any host downtime.
To add the SVC between the host system and the LSI 3500 storage subsystem, perform the following steps: 1. Check that you have installed supported device drivers on your host system. 2. Check that your SAN environment fulfills the supported zoning configurations. 3. Shut down the host. 4. Change the LUN masking in the LSI 3500. Mask the LUNs to the SVC, and remove the masking for the host. Figure 6-9 on page 242 shows the two LUNs with LUN IDs 10 and 11 remapped to SVC ITSOSVC1.
Attention: To avoid potential data loss, back up all the data stored on your external storage before using the wizard.

5. Log on to your SVC Console and open Pools > System Migration; see Figure 6-10.
6. Click Start New Migration; this will start a wizard as shown in Figure 6-11 on page 243.
7. Follow the Storage Migration Wizard as shown in Figure 6-12, then click Next.
8. Figure 6-13 on page 245 shows the Prepare Environment for Migration information; click Next.
Figure 6-13 Migration Wizard - Step 2 of 8 - preparing the environment for migration
11.Figure 6-16 shows the available MDisks for Migration; click Next.
12.Mark both MDisks for migrating as shown in Figure 6-17 on page 247, and then click Next.
13.Figure 6-18 shows the MDisk import process. During the import, a new storage pool is automatically created, in our case Migrationpool_8192. You can see that the command issued by the wizard creates an image mode volume with a one-to-one mapping to mdisk5. Click Close to continue.
14.Now we create a new host object that we will later map the volume to. Click New Host as shown in Figure 6-19 on page 248.
15.Figure 6-20 shows the empty fields that we need to complete to match our host requirements.
16.Here you type the name you want to use for the Host, add the Fibre Channel port, and then select a Host Type. In our case, the name is W2k8_Server. Click Create Host as shown in Figure 6-21 on page 250.
18.Figure 6-23 on page 251 shows that the host was created successfully. Click Next to continue.
19.Figure 6-24 shows all the available volumes to map to a host. Click Next to continue.
20.Mark both volumes and click Map to Host as shown in Figure 6-25 on page 252.
21.Modify Mapping by choosing the host using the drop-down menu as shown in Figure 6-26, and then click Next.
22.The rightmost side of Figure 6-27 on page 253 shows the volumes that can be marked to map to your host. Mark both volumes and click Apply.
23.Figure 6-28 shows the progress of the volume mapping to host. Click Close when finished.
24.After the volume-to-host mapping task completes, notice that the Host Mappings column now shows Yes for these volumes; see Figure 6-29 on page 254. Click Next.
25.Select the storage pool you want to use for migration, in our case STGPool_DS3500-2 as shown in Figure 6-30, and click Next.
Figure 6-30 Migration Wizard - Step 7 - selecting a storage pool to use for migration
26.Migration starts automatically by doing a volume copy, as shown in Figure 6-31 on page 255.
27.Figure 6-32 then appears, advising that migration has begun. Click Finish.
28.The window in Figure 6-33 on page 256 will appear automatically to show the progress of the migration.
29.Go to Volumes > Volumes by Host, as shown in Figure 6-34, to see all the volumes served by the newly created host for this migration step.
30.Figure 6-35 on page 257 shows all the volumes (copy0* and copy1) served by the created host.
You can see in Figure 6-35 that the migrated volume is actually a mirrored volume with one copy on the image mode pool and another copy in a managed mode storage pool. The administrator can choose to leave the volume like this or split the initial copy from the mirror.
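If the administrator decides to split the copies after they are synchronized, the splitvdiskcopy command can be used; a sketch (the copy ID, new volume name, and source volume name are examples only):

IBM_2145:ITSO_SVC1:admin>svcinfo lsvdisksyncprogress
IBM_2145:ITSO_SVC1:admin>svctask splitvdiskcopy -copy 0 -name migrated_copy W2K8_LUN_X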
6.5.3 Importing the migrated disks into an online Windows Server 2008 host
To import the migrated disks into an online Windows Server 2008 host, perform these steps:
1. Start the Windows Server 2008 host system again, and go to Device Manager to see that the disk properties have changed to a 2145 Multi-Path Disk Device (Figure 6-36 on page 258).
3. Select Start > All Programs > Subsystem Device Driver DSM > Subsystem Device Driver DSM to open the SDDDSM command-line utility; see Figure 6-38.
4. Enter the datapath query device command to check whether all paths are available, as planned in your SAN environment; see Example 6-1.
Example 6-1 The datapath query device command
DEV#: 0  DEVICE NAME: Disk1 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801AF813F1000000000000029
============================================================================
Path#          Adapter/Hard Disk        State   Mode     Select  Errors
    0  Scsi Port7 Bus0/Disk1 Part0       OPEN   NORMAL      145       0
    1  Scsi Port7 Bus0/Disk1 Part0       OPEN   NORMAL       75       0
    2  Scsi Port8 Bus0/Disk1 Part0       OPEN   NORMAL       73       0
    3  Scsi Port8 Bus0/Disk1 Part0       OPEN   NORMAL        0       0
    4  Scsi Port8 Bus0/Disk1 Part0       OPEN   NORMAL        0       0
    5  Scsi Port7 Bus0/Disk1 Part0       OPEN   NORMAL        0       0
    6  Scsi Port7 Bus0/Disk1 Part0       OPEN   NORMAL        0       0
    7  Scsi Port8 Bus0/Disk1 Part0       OPEN   NORMAL       76       0
DEV#: 1  DEVICE NAME: Disk2 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801AF813F100000000000002A
============================================================================
Path#          Adapter/Hard Disk        State   Mode     Select  Errors
    0  Scsi Port7 Bus0/Disk2 Part0       OPEN   NORMAL        0       0
    1  Scsi Port7 Bus0/Disk2 Part0       OPEN   NORMAL        0       0
    2  Scsi Port8 Bus0/Disk2 Part0       OPEN   NORMAL        0       0
    3  Scsi Port8 Bus0/Disk2 Part0       OPEN   NORMAL       94       0
    4  Scsi Port8 Bus0/Disk2 Part0       OPEN   NORMAL       77       0
    5  Scsi Port7 Bus0/Disk2 Part0       OPEN   NORMAL       76       0
    6  Scsi Port8 Bus0/Disk2 Part0       OPEN   NORMAL        0       0
    7  Scsi Port7 Bus0/Disk2 Part0       OPEN   NORMAL       68       0

C:\Program Files\IBM\SDDDSM>
6.5.4 Adding the SVC between the host and LSI3500 using the CLI
In this section we only use CLI commands to add direct attached storage to the SVCs managed storage. To read about our preparation of the environment see 6.5.1, Windows Server 2008 host system connected directly to the LSI 3500 on page 238.
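Before the image mode pool is created, the LUNs that were just masked to the SVC have to be discovered; a sketch:

IBM_2145:ITSO_SVC1:admin>svctask detectmdisk
IBM_2145:ITSO_SVC1:admin>svcinfo lsmdisk -filtervalue mode=unmanaged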
IBM_2145:ITSO_SVC1:admin>svctask mkmdiskgrp -name imagepool -tier generic_hdd -easytier off -ext 256
MDisk Group, id [2], successfully created
IBM_2145:ITSO_SVC1:admin>
IBM_2145:ITSO_SVC1:admin>svcinfo lsmdiskgrp
id name               status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning easy_tier easy_tier_status
0  STGPool_DS3500-1   online 3           0           382.50GB 256         382.50GB      0.00MB           0.00MB        0.00MB        0              0       auto      inactive
1  STGPool_DS3500-2   online 3           2           384.00GB 256         354.00GB      30.00GB          30.00GB       30.00GB       7              0       auto      inactive
2  imagepool          online 0           0           0        256         0             0.00MB           0.00MB        0.00MB        0              0       off       inactive
3  STGPool_Multi_Tier online 2           0           20.00GB  256         20.00GB       0.00MB           0.00MB        0.00MB        0              0       auto      inactive
4  MigrationPool_8192 online 2           2           30.00GB  8192        0             30.00GB          30.00GB       30.00GB       100            0       auto      inactive
IBM_2145:ITSO_SVC1:admin>
IBM_2145:ITSO_SVC1:admin>svctask mkvdisk -name image1 -iogrp 0 -mdiskgrp imagepool -vtype image -mdisk mdisk11 -syncrate 80
Virtual Disk, id [0], successfully created
IBM_2145:ITSO_SVC1:admin>svctask mkvdisk -name image2 -iogrp 0 -mdiskgrp imagepool -vtype image -mdisk mdisk12 -syncrate 80
Virtual Disk, id [1], successfully created
IBM_2145:ITSO_SVC1:admin>
IBM_2145:ITSO_SVC1:admin>svcinfo lsvdisk
id name   IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type  vdisk_UID                        fc_map_count copy_count fast_write_state se_copy_count compressed_copy_count RC_change
0  image1 0           io_grp0       online 2            imagepool      20.00GB  image 6005076801AF813F100000000000002B 0            1          empty            0             0                     no
1  image2 0           io_grp0       online 2            imagepool      10.00GB  image 6005076801AF813F100000000000002C 0            1          empty            0             0                     no
IBM_2145:ITSO_SVC1:admin>svctask mkvdiskhostmap -force -host W2K8_HYPERV1 -scsi 0 image1
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO_SVC1:admin>svctask mkvdiskhostmap -force -host W2K8_HYPERV1 -scsi 1 image2
Virtual Disk to Host map, id [1], successfully created
IBM_2145:ITSO_SVC1:admin>svctask addvdiskcopy -mdiskgrp STGPool_DS3500-2 image1
Vdisk [0] copy [1] successfully created
IBM_2145:ITSO_SVC1:admin>svctask addvdiskcopy -mdiskgrp STGPool_DS3500-2 image2
Vdisk [1] copy [1] successfully created
IBM_2145:ITSO_SVC1:admin>svcinfo lsvdisk
id name   IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type vdisk_UID                        fc_map_count copy_count fast_write_state se_copy_count compressed_copy_count RC_change
0  image1 0           io_grp0       online many         many           20.00GB  many 6005076801AF813F100000000000002B 0            2          empty            0             0                     no
1  image2 0           io_grp0       online many         many           10.00GB  many 6005076801AF813F100000000000002C 0            2          empty            0             0                     no
IBM_2145:ITSO_SVC1:admin>
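After the new copies are fully synchronized (which svcinfo lsvdisksyncprogress reports), the original image mode copies could be removed to complete the move onto SVC managed storage; a sketch:

IBM_2145:ITSO_SVC1:admin>svcinfo lsvdisksyncprogress
IBM_2145:ITSO_SVC1:admin>svctask rmvdiskcopy -copy 0 image1
IBM_2145:ITSO_SVC1:admin>svctask rmvdiskcopy -copy 0 image2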
3. To create an empty storage pool for migration, perform Step 1 and Step 2 as shown in Figure 6-40 on page 264 and Figure 6-41 on page 264.
4. Figure 6-42 reminds you that an empty storage pool has been created. Click OK.
5. Figure 6-43 on page 265 shows the progress status of creating a storage pool for migration. Click Close to continue.
6. From the Volumes > All Volumes panel, select the volume that you want to migrate to image mode and select Export to Image Mode from the drop-down menu as shown in Figure 6-44.
7. Select the MDisk to migrate the volume onto, as shown in Figure 6-45 on page 266, and then click Next.
8. Select the storage pool in which the image mode volume will be placed after the migration completes (in our case, the For Migration pool) and click Finish; see Figure 6-46.
9. The volume is exported to image mode and placed in the For Migration pool; see Figure 6-47. Click Close.
10. Navigate to the Pools > MDisks by Pools section and click the + (expand) button. Notice that MDisk6 is now an image mode MDisk, as shown in Figure 6-48.
11. Repeat these steps for every volume that you want to migrate to an image mode volume.
12. Delete the image mode data from the SVC by using the procedure described in 6.5.7, Removing image mode data from the SVC on page 278.
configured on the storage and mapped to the SVC cluster. The LUN is available to the SVC as an unmanaged MDisk8 as shown in Figure 6-49.
To migrate the image mode volume to another image mode volume, perform the following steps:
1. Select the unmanaged MDisk8, click either Actions or the right mouse button, and select Import from the list, as shown in Figure 6-50.
2. The Introduction window opens describing the process of importing the MDisk and mapping an image mode volume to it, as shown in Figure 6-51. Click Next.
3. Do not select a target pool because you do not want to migrate into an SVC managed volume pool. Instead, simply click Finish; see Figure 6-52 on page 270.
4. Figure 6-53 shows a warning message indicating a storage pool has not been selected and the volume will remain in the temporary pool. Click OK to continue.
5. The import process starts, as shown in Figure 6-54, by creating a temporary storage pool Migrationpool_8192 (8 GB) and an image volume. Click Close to continue.
Figure 6-54 Import of MDisk and creation of temporary storage pool Migrationpool_8192
6. As shown in Figure 6-55, there is now an image mode mdisk8, named after the controller name and SCSI ID from the import.
7. Now create a new storage pool, Migration_out (with the same extent size (8 GB) as the automatically created storage pool Migrationpool_8192), for transferring the image mode disk. Go to Pools > MDisks by Pools, as shown in Figure 6-56.
8. Click New Pool to create an empty storage pool, as shown in Figure 6-57.
9. Give your new storage pool the meaningful name Migration_out and click the Advanced Settings drop-down menu. Choose 8 GB as the extent size for your new storage pool, as shown in Figure 6-58.
Figure 6-58 Step 1 of 2 - create an empty storage pool with extent size 8 GB
10.Figure 6-59 shows a storage pool window without any disks. Click Finish to continue to create an empty storage pool.
11.The warning in Figure 6-60 on page 274 pops up to remind you that an empty storage pool will be created. Click OK to continue.
12.Figure 6-61 shows the progress of creating the storage pool Migration_out. Click Close to continue.
13. The empty storage pool for image-to-image migration has been created. Go to Volumes > Volumes by Pool, as shown in Figure 6-62.
14.Select the storage pool of the imported disk, Migrationpool_8192 in the left panel. Then mark the image disk you want to migrate out and select Actions. From the drop-down menu select Export to Image Mode, as shown in Figure 6-63.
15.Select the target MDisk on the new disk controller that you want to migrate to. Click Next, as shown in Figure 6-64.
16. Select the (empty) migrate-out target storage pool, as shown in Figure 6-65. Click Finish.
17.Figure 6-66 shows the progress status of the Export Volume to Image process. Click Close to continue.
18.Figure 6-67 on page 277 shows that the MDisk location has changed as expected to the new storage pool Migration_out.
19. Repeat these steps for all image mode volumes that you want to migrate.
20. If you want to delete the data from the SVC, use the procedure described in 6.5.7, Removing image mode data from the SVC on page 278.
If the command succeeds on an image mode volume, the underlying back-end storage controller will be consistent with the data that a host might previously have read from the image mode volume; that is, all fast write data will have been flushed to the underlying LUN. Deleting an image mode volume causes the MDisk that is associated with the volume to be ejected from the storage pool. The mode of the MDisk is returned to unmanaged.

Note: This situation only applies to image mode volumes. If you delete a normal volume, all of its data is also deleted.

As shown in Example 6-1 on page 259, the SAN disks currently reside on the SVC 2145 device. Check that you have installed the supported device drivers on your host system. To switch back to the storage subsystem, perform the following steps:
1. Shut down your host system.
2. Open the Volumes by Host view to see which volumes are currently mapped to your host, as shown in Figure 6-68.
3. Locate your host and select your volume. Then, right-click to show the drop-down menu and select Unmap all Hosts, as shown in Figure 6-69 on page 280.
4. Verify your unmap process, as shown in Figure 6-70, and click Unmap.
5. Figure 6-71 shows that the volume has been removed from the SVC.
6. Repeat steps 3 to 5 for every image mode volume that you want to remove from the SVC.
7. Edit the LUN masking on your storage subsystem: remove the SVC from the LUN masking, and add the host to the masking.
8. Power on your host system.
6.5.8 Map the free disks onto the Windows Server 2008
To detect and map the disks that have been freed from SVC management, go to the Windows Server 2008 host:
1. Using your LSI 3500 Storage Manager interface, remap the two LUNs that were MDisks back to your Windows Server 2008 server.
2. Open your Device Manager window. Figure 6-72 on page 282 shows that the LUNs are now back to an LSI INF 01-00 type.
3. Open your Disk Management window and notice that the disks have appeared. You might need to reactivate your disk by using the right-click option on each disk.
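If the disks do not appear in Disk Management right away, you can also force a rescan from the command line. This is a minimal sketch that uses the standard Windows DISKPART utility (it is generic and not specific to our lab setup):

C:\>diskpart
DISKPART> rescan
DISKPART> list disk
DISKPART> exit

The rescan command asks Windows to re-enumerate the storage buses for new disks, which is equivalent to the rescan actions in the Device Manager and Disk Management windows.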
You can use these three activities individually, or together, to migrate your Linux server's LUNs from one storage subsystem to another storage subsystem by using the SVC as your migration tool. If you do not use all three activities, you can still use them to introduce the SVC into, or remove it from, your environment. The only downtime required for these activities is the time that it takes to remask and remap the LUNs between the storage subsystems and your SVC. In Figure 6-74, we show our Linux environment.
Figure 6-74 shows our Linux server connected to our SAN infrastructure. It has two LUNs that are masked directly to it from our storage subsystem:
The LUN with SCSI ID 0 holds the host operating system (our host is Red Hat Enterprise Linux V5.1), and this LUN is used to boot the system directly from the storage subsystem. The operating system identifies it as /dev/mapper/VolGroup00-LogVol00.
SCSI LUN ID 0: To successfully boot a host off the SAN, you must have assigned the LUN as SCSI LUN ID 0. Linux sees this LUN as our /dev/sda disk.
We have also mapped a second disk (SCSI LUN ID 1) to the host. It is 5 GB in size, and it is mounted in the /data folder on the /dev/dm-2 disk.
Example 6-11 on page 285 shows the disks that are directly attached to the Linux host.
[root@Palau data]# df
Filesystem                      1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00  10093752 1971344   7601400  21% /
/dev/sda1                          101086   12054     83813  13% /boot
tmpfs                             1033496       0   1033496   0% /dev/shm
/dev/dm-2                         5160576  158160   4740272   4% /data
[root@Palau data]#
Our Linux server represents a typical SAN environment with a host directly using LUNs that were created on a SAN storage subsystem, as shown in Figure 6-74 on page 284: The Linux server's host bus adapter (HBA) cards are zoned so that they are in the Green Zone with our storage subsystem. The two LUNs that have been defined on the storage subsystem are, through LUN masking, directly available to our Linux server.
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name Palau_Pool1 -ext 512
MDisk Group, id [2], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning easy_tier easy_tier_status
2 Palau_Pool1 online 0 0 0 512 0 0.00MB 0.00MB 0.00MB 0 0 auto inactive
IBM_2145:ITSO-CLS1:admin>svcinfo lshbaportcandidate
id
210000E08B89C1CD
210000E08B054CAA
210000E08B0548BC
210000E08B0541BC
210000E08B89CCC2
IBM_2145:ITSO-CLS1:admin>

If you do not know the WWN of your Linux server, you can look at which WWNs are currently configured on your storage subsystem for this host. Figure 6-76 shows our configured ports on an IBM DS4700 storage subsystem.
After verifying that the SVC can see our host (linux2), we create the host entry and assign the WWN to this entry. Example 6-14 shows these commands.
Example 6-14 Create the host entry
IBM_2145:ITSO-CLS1:admin>svctask mkhost -name Palau -hbawwpn 210000E08B054CAA:210000E08B89C1CD
Host, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lshost Palau
id 0
name Palau
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B89C1CD
node_logged_in_count 4
state inactive
WWPN 210000E08B054CAA
node_logged_in_count 4
state inactive
IBM_2145:ITSO-CLS1:admin>
IBM_2145:ITSO-CLS1:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0 DS4500 IBM 1742-900
1 DS4700 IBM 1814 FAStT
IBM_2145:ITSO-CLS1:admin>

You can rename a storage subsystem to a more meaningful name with the svctask chcontroller -name command. If multiple storage subsystems are connected to your SAN fabric, renaming them makes it considerably easier to identify them.
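For example, a minimal sketch of such a rename (the name ITSO-DS4700 is only illustrative; substitute your own naming convention and controller ID):

IBM_2145:ITSO-CLS1:admin>svctask chcontroller -name ITSO-DS4700 1
IBM_2145:ITSO-CLS1:admin>svcinfo lscontroller

The second command simply confirms that the new name is in effect.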
Before we move the LUNs to the SVC, we must prepare the host multipath configuration for the SVC. Edit your multipath.conf file and restart the multipathd daemon, as shown in Example 6-16, adding the content of Example 6-17 to the file.
Example 6-16 Edit the multipath.conf file
[root@Palau ~]# vi /etc/multipath.conf
[root@Palau ~]# service multipathd stop
Stopping multipathd daemon:                                [  OK  ]
[root@Palau ~]# service multipathd start
Starting multipathd daemon:                                [  OK  ]
[root@Palau ~]#
Example 6-17 Data to add to the multipath.conf file
# SVC
device {
        vendor "IBM"
        product "2145CF8"
        path_grouping_policy group_by_serial
}

We are now ready to move the ownership of the disks to the SVC, discover them as MDisks, and give them back to the host as volumes.
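Before doing so, you can verify that multipathd picked up the new configuration. A minimal sketch (the output is omitted here, and your device names and WWIDs will differ):

[root@Palau ~]# multipath -ll

The multipath -ll command lists every multipath device with its paths; after the LUNs are moved to the SVC, the SVC volumes appear here with the IBM vendor and product IDs that the device stanza above matches.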
3. Using Storage Manager (our storage subsystem management tool), we can unmap and unmask the disks from the Linux server and remap and remask the disks to the SVC.

LUN IDs: Even though we are using boot from SAN, you can map the boot disk to the SVC with any LUN number. It does not have to be 0 until later, when we configure the mapping in the SVC to the host.

4. From the SVC, discover the new disks with the svctask detectmdisk command. The disks will be discovered and named mdiskN, where N is the next available MDisk number (starting from 0). Example 6-18 shows the commands that we used to discover our MDisks and to verify that we have the correct MDisks.
Example 6-18 Discover the new MDisks
IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID tier
26 mdisk26 online unmanaged 12.0GB 0000000000000008 DS4700 600a0b800026b2820000428f48739bca00000000000000000000000000000000 generic_hdd
27 mdisk27 online unmanaged 5.0GB 0000000000000009 DS4700 600a0b800026b282000042f84873c7e100000000000000000000000000000000 generic_hdd
IBM_2145:ITSO-CLS1:admin>

Important: Match your discovered MDisk serial numbers (the UID column of the svcinfo lsmdisk output) with the serial numbers that you recorded earlier (in Figure 6-77 and Figure 6-78 on page 289).

5. After we have verified that we have the correct MDisks, we rename them to avoid confusion in the future when we perform other MDisk-related tasks (Example 6-19).
Example 6-19 Rename the MDisks
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name md_palauS mdisk26
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name md_palauD mdisk27
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID tier
26 md_palauS online unmanaged 12.0GB 0000000000000008 DS4700 600a0b800026b2820000428f48739bca00000000000000000000000000000000 generic_hdd
27 md_palauD online unmanaged 5.0GB 0000000000000009 DS4700 600a0b800026b282000042f84873c7e100000000000000000000000000000000 generic_hdd
IBM_2145:ITSO-CLS1:admin>
6. We create our image mode volumes with the svctask mkvdisk command and the -vtype image option (Example 6-20). This command virtualizes the disks in the exact same layout as though they were not virtualized.
Example 6-20 Create the image mode volumes
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp Palau_Pool1 -iogrp 0 -vtype image -mdisk md_palauS -name palau_SANB
Virtual Disk, id [29], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp Palau_Pool2 -iogrp 0 -vtype image -mdisk md_palauD -name palau_Data
Virtual Disk, id [30], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID tier
26 md_palauS online image 2 Palau_Pool1 12.0GB 0000000000000008 DS4700 600a0b800026b2820000428f48739bca00000000000000000000000000000000 generic_hdd
27 md_palauD online image 3 Palau_Pool2 5.0GB 0000000000000009 DS4700 600a0b800026b282000042f84873c7e100000000000000000000000000000000 generic_hdd
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state se_copy_count
29 palau_SANB 0 io_grp0 online 2 Palau_Pool1 12.0GB image 60050768018301BF280000000000002B 0 1 empty 0
30 palau_Data 0 io_grp0 online 3 Palau_Pool2 5.0GB image 60050768018301BF280000000000002C 0 1 empty 0

7. Map the new image mode volumes to the host (Example 6-21).

Important: Make sure that you map the boot volume with SCSI ID 0 to your host. The host must be able to identify the boot volume during the boot process.
Example 6-21 Map the volumes to the host
IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Palau -scsi 0 palau_SANB
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Palau -scsi 1 palau_Data
Virtual Disk to Host map, id [1], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap Palau
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
0 Palau 0 29 palau_SANB 210000E08B89C1CD 60050768018301BF280000000000002B
0 Palau 1 30 palau_Data 210000E08B89C1CD 60050768018301BF280000000000002C
FlashCopy: While the application is in a quiescent state, you can choose to use FlashCopy to copy the new image volumes onto other volumes. You do not need to wait until the FlashCopy process has completed before starting your application.

8. Power on your host server and enter your Fibre Channel (FC) HBA adapter BIOS before booting the operating system, and make sure that you change the boot configuration so that it points to the SVC. In our example, we performed the following steps on a QLogic HBA:
a. Press Ctrl+Q to enter the HBA BIOS.
b. Open Configuration Settings.
c. Open Selectable Boot Settings.
d. Change the entry from your storage subsystem to the SVC 2145 LUN with SCSI ID 0.
e. Exit the menu and save your changes.
9. Boot up your Linux operating system. If you only moved the application LUN to the SVC and left your Linux server running, you only need to follow these steps to see the new volume:
a. Load your HBA driver with the modprobe DRIVER_NAME command. If you did not (and cannot) unload your HBA driver, you can issue commands to the kernel to rescan the SCSI bus to see the new volumes (see the rescan sketch that follows Example 6-22).
b. Check your syslog, and verify that the kernel found the new volumes. On Red Hat Enterprise Linux, the syslog is stored in the /var/log/messages file.
c. If your application and data are on an LVM volume, rediscover the VG, and then run the vgchange -a y VOLUME_GROUP command to activate the VG.
10. Mount your file systems with the mount /MOUNT_POINT command (Example 6-22). The df output shows us that all of the disks are available again.
Example 6-22 Mount data disk
[root@Palau data]# mount /dev/dm-2 /data [root@Palau data]# df Filesystem 1K-blocks Used Available Use% Mounted on /dev/mapper/VolGroup00-LogVol00 10093752 1938056 7634688 21% / /dev/sda1 101086 12054 83813 13% /boot tmpfs 1033496 0 1033496 0% /dev/shm /dev/dm-2 5160576 158160 4740272 4% /data [root@Palau data]# 11.You are now ready to start your application.
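As mentioned in step 9a, you can ask the kernel to rescan the SCSI bus instead of reloading the HBA driver. A minimal sketch, assuming that the HBA is registered as SCSI host host0 (list /sys/class/scsi_host to see the hosts on your system):

[root@Palau ~]# ls /sys/class/scsi_host
host0
[root@Palau ~]# echo "- - -" > /sys/class/scsi_host/host0/scan

The three dashes are wildcards for channel, target, and LUN. Afterward, check /var/log/messages to confirm that the kernel found the new disks.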
IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MD_palauVD -ext 512
MDisk Group, id [8], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID tier
26 md_palauS online image 2 Palau_Pool1 12.0GB 0000000000000008 DS4700 600a0b800026b2820000428f48739bca00000000000000000000000000000000 generic_hdd
27 md_palauD online image 3 Palau_Pool2 5.0GB 0000000000000009 DS4700 600a0b800026b282000042f84873c7e100000000000000000000000000000000 generic_hdd
28 mdisk28 online unmanaged 8.0GB 0000000000000010 DS4500 600a0b8000174233000000b9487778ab00000000000000000000000000000000 generic_hdd
29 mdisk29 online unmanaged 8.0GB 0000000000000011 DS4500 600a0b80001744310000010f48776bae00000000000000000000000000000000 generic_hdd
30 mdisk30 online unmanaged 8.0GB 0000000000000012 DS4500 600a0b8000174233000000bb487778d900000000000000000000000000000000 generic_hdd
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name palau-md1 mdisk28
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name palau-md2 mdisk29
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name palau-md3 mdisk30
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk palau-md1 MD_palauVD
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk palau-md2 MD_palauVD
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk palau-md3 MD_palauVD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID tier
26 md_palauS online image 2 Palau_Pool1 12.0GB 0000000000000008 DS4700 600a0b800026b2820000428f48739bca00000000000000000000000000000000 generic_hdd
27 md_palauD online image 3 Palau_Pool2 5.0GB 0000000000000009 DS4700 600a0b800026b282000042f84873c7e100000000000000000000000000000000 generic_hdd
28 palau-md1 online managed 8 MD_palauVD 8.0GB 0000000000000010 DS4500 600a0b8000174233000000b9487778ab00000000000000000000000000000000 generic_hdd
29 palau-md2 online managed 8 MD_palauVD 8.0GB 0000000000000011 DS4500 600a0b80001744310000010f48776bae00000000000000000000000000000000 generic_hdd
30 palau-md3 online managed 8 MD_palauVD 8.0GB 0000000000000012 DS4500 600a0b8000174233000000bb487778d900000000000000000000000000000000 generic_hdd
IBM_2145:ITSO-CLS1:admin>
IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -vdisk palau_SANB -mdiskgrp MD_palauVD
IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -vdisk palau_Data -mdiskgrp MD_palauVD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 25
migrate_source_vdisk_index 29
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 70
migrate_source_vdisk_index 30
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>

After this task has completed, Example 6-25 shows that the volumes are now spread over three MDisks.
Example 6-25 Migration complete
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp MD_palauVD
id 8
name MD_palauVD
status online
mdisk_count 3
vdisk_count 2
capacity 24.0GB
extent_size 512
free_capacity 7.0GB
virtual_capacity 17.00GB
used_capacity 17.00GB
real_capacity 17.00GB
overallocation 70
warning 0
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskmember palau_SANB
id
28
29
30
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskmember palau_Data
id
28
29
30
IBM_2145:ITSO-CLS1:admin>

Our migration to striped volumes on another storage subsystem (DS4500) is now complete. The original MDisks (md_palauS and md_palauD) can now be removed from the SVC, and their LUNs can be removed from the storage subsystem. If these LUNs are the last LUNs that were used on our DS4700 storage subsystem, we can remove it from our SAN fabric.
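A minimal sketch of that cleanup, using the pool names from our example (after the migration, md_palauS and md_palauD no longer hold any volume extents, so removing them returns them to the unmanaged state):

IBM_2145:ITSO-CLS1:admin>svctask rmmdisk -mdisk md_palauS Palau_Pool1
IBM_2145:ITSO-CLS1:admin>svctask rmmdisk -mdisk md_palauD Palau_Pool2
IBM_2145:ITSO-CLS1:admin>svctask detectmdisk

After the LUNs are unmapped from the SVC on the storage subsystem, running svctask detectmdisk clears the stale MDisk entries.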
IBM_2145:ITSO-CLS1:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0 controller0 IBM 1814 FAStT
IBM_2145:ITSO-CLS1:admin>
It is also a good idea to rename the new storage subsystem's controller to a more useful name, which can be done with the svctask chcontroller -name command, as in Example 6-27 on page 298.
IBM_2145:ITSO-CLS1:admin>svctask chcontroller -name ITSO-4700 0
IBM_2145:ITSO-CLS1:admin>

Also verify that the controller name was changed as you wanted, as shown in Example 6-28.
Example 6-28 Recheck controller name
IBM_2145:ITSO-CLS1:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0 ITSO-4700 IBM 1814 FAStT
IBM_2145:ITSO-CLS1:admin>
IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
0 mdisk0 online managed 600a0b800026b282000042f84873c7e100000000000000000000000000000000
28 palau-md1 online managed 8 MD_palauVD 8.0GB 0000000000000010 DS4500 600a0b8000174233000000b9487778ab00000000000000000000000000000000
29 palau-md2 online managed 8 MD_palauVD 8.0GB 0000000000000011 DS4500 600a0b80001744310000010f48776bae00000000000000000000000000000000
30 palau-md3 online managed 8 MD_palauVD 8.0GB 0000000000000012 DS4500 600a0b8000174233000000bb487778d900000000000000000000000000000000
31 mdisk31 online unmanaged 6.0GB 0000000000000013 DS4500 600a0b8000174233000000bd4877890f00000000000000000000000000000000
32 mdisk32 online unmanaged 12.5GB 0000000000000014 DS4500 600a0b80001744310000011048777bda00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>

Even though the MDisks will not stay in the SVC for long, we suggest that you rename them to more meaningful names so that they do not get confused with other MDisks that are used by other activities. Also, we create the storage pools to hold our new MDisks, which is shown in Example 6-30 on page 299.
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name mdpalau_ivd mdisk32
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_Palauivd -ext 512
MDisk Group, id [9], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_Palauivd -ext 512
CMMVC5758E Object name already exists.
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning easy_tier easy_tier_status
8 MD_palauVD online 3 2 24.0GB 512 7.0GB 17.00GB 17.00GB 17.00GB 70 0 auto inactive
9 MDG_Palauivd online 0 0 0 512 0 0.00MB 0.00MB 0.00MB 0 0 auto inactive
IBM_2145:ITSO-CLS1:admin>
Our SVC environment is now ready for the volume migration to image mode volumes.
IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk palau_SANB -mdisk mdpalau_ivd -mdiskgrp MD_palauVD
IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk palau_Data -mdisk mdpalau_ivd1 -mdiskgrp MD_palauVD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
28 palau-md1 online managed 8 MD_palauVD 8.0GB 0000000000000010 DS4500 600a0b8000174233000000b9487778ab00000000000000000000000000000000
29 palau-md2 online managed 8 MD_palauVD 8.0GB 0000000000000011 DS4500 600a0b80001744310000010f48776bae00000000000000000000000000000000
30 palau-md3 online managed 8 MD_palauVD 8.0GB 0000000000000012 DS4500 600a0b8000174233000000bb487778d900000000000000000000000000000000
31 mdpalau_ivd1 online image 8 MD_palauVD 6.0GB 0000000000000013 DS4500 600a0b8000174233000000bd4877890f00000000000000000000000000000000
32 mdpalau_ivd online image 8 MD_palauVD 12.5GB 0000000000000014 DS4500 600a0b80001744310000011048777bda00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type Migrate_to_Image
progress 4
migrate_source_vdisk_index 29
migrate_target_mdisk_index 32
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type Migrate_to_Image
progress 30
migrate_source_vdisk_index 30
migrate_target_mdisk_index 31
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>

During the migration, our Linux server is unaware that its data is being physically moved between storage subsystems. After the migration has completed, the image mode volumes are ready to be removed from the Linux server, and the real LUNs can be mapped and masked directly to the host by using the storage subsystem's tools.
rediscover the disks later). It is possible to tell the Linux SCSI subsystem to rescan for new disks without requiring you to unload the HBA driver; see the rescan sketch that follows Example 6-22.
3. Remove the volumes from the host by using the svctask rmvdiskhostmap command (Example 6-32). To double-check that you have removed the volumes, use the svcinfo lshostvdiskmap command, which shows that these disks are no longer mapped to the Linux server.
Example 6-32 Remove the volumes from the host
IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Palau palau_SANB
IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Palau palau_Data
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap Palau
IBM_2145:ITSO-CLS1:admin>
4. Remove the volumes from the SVC by using the svctask rmvdisk command. This step makes the MDisks unmanaged, as seen in Example 6-33.

Cached data: When you run the svctask rmvdisk command, the SVC first double-checks that there is no outstanding dirty cached data for the volume that is being removed. If there is still uncommitted cached data, the command fails with the following error message:
CMMVC6212E The command failed because data in the cache has not been committed to disk
You will have to wait for this cached data to be committed to the underlying storage subsystem before you can remove the volume. The SVC automatically destages uncommitted cached data two minutes after the last write activity for the volume. How much data there is to destage, and how busy the I/O subsystem is, determine how long this command takes to complete. You can check whether the volume has uncommitted data in the cache by using the svcinfo lsvdisk <VDISKNAME> command and checking the fast_write_state attribute. This attribute has the following meanings:
empty: No modified data exists in the cache.
not_empty: Modified data might exist in the cache.
corrupt: Modified data might have existed in the cache, but any data has been lost.
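For example, a quick check before removing a volume; a minimal sketch, assuming the volume name palau_SANB from our example (output abbreviated to the relevant attribute):

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk palau_SANB
...
fast_write_state empty
...

If fast_write_state reports empty, the svctask rmvdisk command completes without the CMMVC6212E error.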
IBM_2145:ITSO-CLS1:admin>svctask rmvdisk palau_SANB
IBM_2145:ITSO-CLS1:admin>svctask rmvdisk palau_Data
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
31 mdpalau_ivd1 online unmanaged 6.0GB 0000000000000013 DS4500 600a0b8000174233000000bd4877890f00000000000000000000000000000000
32 mdpalau_ivd online unmanaged 12.5GB 0000000000000014 DS4500 600a0b80001744310000011048777bda00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>
5. Using Storage Manager (our storage subsystem management tool), unmap and unmask the disks from the SVC back to the Linux server.

Important: If one of the disks is used to boot your Linux server, you must make sure that it is presented back to the host as SCSI ID 0 so that the FC adapter BIOS finds that disk during its initialization.

6. Power on your host server and enter your FC HBA BIOS before booting the OS. Make sure that you change the boot configuration so that it points back to your storage subsystem. In our example, we performed the following steps on a QLogic HBA:
a. Pressed Ctrl+Q to enter the HBA BIOS.
b. Opened Configuration Settings.
c. Opened Selectable Boot Settings.
d. Changed the entry from the SVC to your storage subsystem LUN with SCSI ID 0.
e. Exited the menu and saved the changes.

Important: This is the last step that you can perform and still safely back out everything that you have done so far. Up to this point, you can reverse all of the actions that you have performed to get the server back online without data loss:
Remap and remask the LUNs back to the SVC.
Run the svctask detectmdisk command to rediscover the MDisks.
Recreate the volumes with the svctask mkvdisk command.
Remap the volumes back to the server with the svctask mkvdiskhostmap command.
After you start the next step, you might not be able to turn back without the risk of data loss.

7. We now restart the Linux server. If all of the zoning and LUN masking and mapping were done successfully, the Linux server boots as though nothing has happened. However, if you only moved the application LUN to the SVC and left your Linux server running, you must follow these steps to see the new volume:
a. Load your HBA driver with the modprobe DRIVER_NAME command. If you did not (and cannot) unload your HBA driver, you can issue commands to the kernel to rescan the SCSI bus to see the new volumes (see the rescan sketch that follows Example 6-22).
b. Check your syslog and verify that the kernel found the new volumes. On Red Hat Enterprise Linux, the syslog is stored in the /var/log/messages file.
c. If your application and data are on an LVM volume, run the vgscan command to rediscover the VG, and then run the vgchange -a y VOLUME_GROUP command to activate the VG.
8. Mount your file systems with the mount /MOUNT_POINT command (Example 6-34 on page 303). The df output shows that all of the disks are available again.
[root@Palau ~]# mount /dev/dm-2 /data
[root@Palau ~]# df
Filesystem                      1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00  10093752 1938124   7634620  21% /
/dev/sda1                          101086   12054     83813  13% /boot
tmpfs                             1033496       0   1033496   0% /dev/shm
/dev/dm-2                         5160576  158160   4740272   4% /data
[root@Palau ~]#

9. You are ready to start your application.
10. Finally, to make sure that the MDisks are removed from the SVC, run the svctask detectmdisk command. The MDisks will first be discovered as offline, and then they will automatically be removed when the SVC determines that there are no volumes associated with these MDisks.
Figure 6-80 shows our ESX server connected to the SAN infrastructure. It has two LUNs that are masked directly to it from our storage subsystem. Our ESX server represents a typical SAN environment with a host directly using LUNs that were created on a SAN storage subsystem, as shown in Figure 6-80: The ESX Server's HBA cards are zoned so that they are in the Green Zone with our storage subsystem. The two LUNs that have been defined on the storage subsystem are, through LUN masking, directly available to our ESX server.
Attention: Be extremely careful when connecting the SVC to your storage area network, because these activities require you to connect cables to your SAN switches and to alter your switch zone configuration. Performing these activities incorrectly can render your SAN inoperable, so make sure that you fully understand the effect of your actions.

You must perform these tasks to connect the SVC to your SAN fabric:
Assemble your SVC components (nodes, uninterruptible power supply unit, SSPC), cable the SVC correctly, power the SVC on, and verify that the SVC is visible on your SAN.
Create and configure your SVC cluster.
Create these additional zones:
- An SVC node zone (the Black Zone in our picture on Example 6-57 on page 327)
- A storage zone (our Red Zone)
- A host zone (our Blue Zone)
For more detailed information about how to configure the zones correctly, see Chapter 3, Planning and configuration on page 67. Figure 6-81 shows the environment that we set up.
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_Nile_VM -ext 512
MDisk Group, id [3], successfully created
Figure 6-82 Obtain your WWN using the VMware Management Console
Use the svcinfo lshbaportcandidate command on the SVC to list all of the WWNs, which have not yet been allocated to a host, that the SVC can see on the SAN fabric. Example 6-36 on page 307 shows the output of the nodes that it found on our SAN fabric. (If the port did not show up, it indicates a zone configuration problem.)
IBM_2145:ITSO-CLS1:admin>svcinfo lshbaportcandidate
id
210000E08B89B8C0
210000E08B892BCD
210000E08B0548BC
210000E08B0541BC
210000E08B89CCC2
IBM_2145:ITSO-CLS1:admin>

After verifying that the SVC can see our host, we create the host entry and assign the WWN to this entry. Example 6-37 shows these commands.
Example 6-37 Create the host entry
IBM_2145:ITSO-CLS1:admin>svctask mkhost -name Nile -hbawwpn 210000E08B89B8C0:210000E08B892BCD
Host, id [1], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lshost Nile
id 1
name Nile
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B892BCD
node_logged_in_count 4
state active
WWPN 210000E08B89B8C0
node_logged_in_count 4
state active
IBM_2145:ITSO-CLS1:admin>
IBM_2145:ITSO-CLS1:admin>svcinfo lscontroller
id controller_name ctrl_s/n product_id_low product_id_high
0 DS4500 1742-900
1 DS4700 1814 FAStT
If you also use a DS4000 family storage subsystem, Storage Manager provides the LUN serial numbers. Right-click your logical drive, and choose Properties. The following figures show our serial numbers. Figure 6-83 shows the serial number of the VM_W2k3 disk.
We are ready to move the ownership of the disks to the SVC, discover them as MDisks, and give them back to the host as volumes.
The virtual machines are located on these LUNs. Therefore, to move these LUNs under the control of the SVC, we do not need to reboot the entire ESX server, but we do have to stop and suspend all VMware guests that are using these LUNs.
2. Identify all of the VMware guests that are using this LUN and shut them down. One way to identify them is to highlight the virtual machine and open the Summary tab. The datastore that is used is displayed under Datastore. Figure 6-87 on page 310 shows a Linux virtual machine using the datastore named SLES_Costa_Rica.
Figure 6-87 Identify the LUNs that are used by virtual machines
3. If you have several ESX hosts, also check the other ESX hosts to make sure that there is no guest operating system that is running and using this datastore. 4. Repeat steps 1 to 3 for every datastore that you want to migrate. 5. After the guests are suspended, we use Storage Manager (our storage subsystem management tool) to unmap and unmask the disks from the ESX server and to remap and to remask the disks to the SVC. 6. From the SVC, discover the new disks with the svctask detectmdisk command. The disks will be discovered and named as mdiskN, where N is the next available MDisk number (starting from 0). Example 6-39 shows the commands that we used to discover our MDisks and to verify that we have the correct MDisks.
Example 6-39 Discover the new MDisks
IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
21 mdisk21 online unmanaged 60.0GB 0000000000000008 DS4700 600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 mdisk22 online unmanaged 70.0GB 0000000000000009 DS4700 600a0b80002904de0000447a486d14cd00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>
Important: Match your discovered MDisk serial numbers (the UID column of the svcinfo lsmdisk output) with the serial numbers that you obtained earlier (in Figure 6-83 and Figure 6-84 on page 308).
7. After we have verified that we have the correct MDisks, we rename them to avoid confusion in the future when we perform other MDisk-related tasks; see Example 6-40.
Example 6-40 Rename the MDisks
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name ESX_W2k3 mdisk22
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name ESX_SLES mdisk21
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
21 ESX_SLES online unmanaged 60.0GB 0000000000000008 DS4700 600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 ESX_W2k3 online unmanaged 70.0GB 0000000000000009 DS4700 600a0b80002904de0000447a486d14cd00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>

8. We create our image mode volumes with the svctask mkvdisk command; see Example 6-41. The -vtype image parameter ensures that image mode volumes are created, which means that the virtualized disks have the exact same layout as though they were not virtualized.
Example 6-41 Create the image mode volumes
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MDG_Nile_VM -iogrp 0 -vtype image -mdisk ESX_W2k3 -name ESX_W2k3_IVD
Virtual Disk, id [29], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MDG_Nile_VM -iogrp 0 -vtype image -mdisk ESX_SLES -name ESX_SLES_IVD
Virtual Disk, id [30], successfully created
IBM_2145:ITSO-CLS1:admin>

9. Finally, we can map the new image mode volumes to the host. Use the same SCSI LUN IDs as on the storage subsystem for the mapping; see Example 6-42.
Example 6-42 Map the volumes to the host
IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Nile -scsi 0 ESX_SLES_IVD
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Nile -scsi 1 ESX_W2k3_IVD
Virtual Disk to Host map, id [1], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
1 Nile 0 30 ESX_SLES_IVD 210000E08B892BCD 60050768018301BF280000000000002A
1 Nile 1 29 ESX_W2k3_IVD 210000E08B892BCD 60050768018301BF2800000000000029
10. Using the VMware management console, rescan to discover the new volumes. Open the Configuration tab, select Storage Adapters, and click Rescan. During the rescan, you might receive geometry errors when ESX discovers that the old disk has disappeared. Your volumes will appear with the new vmhba devices. (A command-line alternative to the GUI rescan is sketched after these steps.)
11. We are now ready to restart the VMware guests. At this point, you have migrated the VMware LUNs successfully to the SVC.
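If you prefer the ESX service console over the GUI, the same rescan can be triggered from the command line. A minimal sketch, assuming classic ESX with a service console and that the FC adapter is vmhba1 (check the adapter names under Storage Adapters first):

[root@Nile root]# esxcfg-rescan vmhba1

Run the command once for each FC adapter that is zoned to the SVC.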
We also need a Green Zone for our host to use when we are ready for it to directly access the disk, after it has been removed from the SVC. We assume that you have created the necessary zones. In our environment, we have performed these tasks:
Created three LUNs on another storage subsystem and mapped them to the SVC
Discovered them as MDisks
Created a new storage pool
Renamed these LUNs to more meaningful names
Put all of these MDisks into this storage pool
You can see the output of our commands in Example 6-43.

Example 6-43 Create a new storage pool

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
21 ESX_SLES online image 3 MDG_Nile_VM 60.0GB 0000000000000008 DS4700 600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 ESX_W2k3 online image 3 MDG_Nile_VM 70.0GB 0000000000000009 DS4700 600a0b80002904de0000447a486d14cd00000000000000000000000000000000
23 mdisk23 online unmanaged 55.0GB 000000000000000D DS4500 600a0b8000174233000000b4486d250300000000000000000000000000000000
24 mdisk24 online unmanaged 55.0GB 000000000000000E DS4500 600a0b800017443100000108486d182c00000000000000000000000000000000
25 mdisk25 online unmanaged 55.0GB 000000000000000F DS4500 600a0b8000174233000000b5486d255b00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_ESX_VD -ext 512
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name IBMESX-MD1 mdisk23
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name IBMESX-MD2 mdisk24
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name IBMESX-MD3 mdisk25
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk IBMESX-MD1 MDG_ESX_VD
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk IBMESX-MD2 MDG_ESX_VD
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk IBMESX-MD3 MDG_ESX_VD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
21 ESX_SLES online image 3 MDG_Nile_VM 60.0GB 0000000000000008 DS4700 600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 ESX_W2k3 online image 3 MDG_Nile_VM 70.0GB 0000000000000009 DS4700 600a0b80002904de0000447a486d14cd00000000000000000000000000000000
23 IBMESX-MD1 online managed 4 MDG_ESX_VD 55.0GB 000000000000000D DS4500 600a0b8000174233000000b4486d250300000000000000000000000000000000
24 IBMESX-MD2 online managed 4 MDG_ESX_VD 55.0GB 000000000000000E DS4500 600a0b800017443100000108486d182c00000000000000000000000000000000
25 IBMESX-MD3 online managed 4 MDG_ESX_VD 55.0GB 000000000000000F DS4500 600a0b8000174233000000b5486d255b00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>
IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -vdisk ESX_SLES_IVD -mdiskgrp MDG_ESX_VD
IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -vdisk ESX_W2k3_IVD -mdiskgrp MDG_ESX_VD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 30
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 29
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 1
migrate_source_vdisk_index 30
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 29
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning
3 MDG_Nile_VM online 2 2 130.0GB 512 0 130.00GB 130.00GB 130.00GB 100 0
4 MDG_ESX_VD online 3 0 165.0GB 512 165.0GB 0.00MB 0.00MB 0.00MB 0 0
IBM_2145:ITSO-CLS1:admin>
If you compare the svcinfo lsmdiskgrp output after the migration, as shown in Example 6-45, you can see that all of the virtual capacity has now been moved from the old storage pool (MDG_Nile_VM) to the new storage pool (MDG_ESX_VD). The mdisk_count column shows that the capacity is now spread over three MDisks.
Example 6-45 List MDisk group
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp
id name status capacity extent_size free_capacity real_capacity overallocation warning
3 MDG_Nile_VM online 130.0GB 512 130.0GB 0.00MB 0 0
4 MDG_ESX_VD online 165.0GB 512 35.0GB 130.00GB 78 0
IBM_2145:ITSO-CLS1:admin>
The migration to the SVC is complete. You can remove the original MDisks from the SVC and remove these LUNs from the storage subsystem. If these LUNs are the last LUNs that were used on our storage subsystem, we can remove it from our SAN fabric.
There are also other preparatory activities that we can perform before we shut down the host and reconfigure the LUN masking and mapping. This section describes those activities. In our example, we will move volumes that are located on a DS4500 to image mode volumes that are located on a DS4700. If you are moving the data to a new storage subsystem, it is assumed that this storage subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches. Your environment must look similar to our environment, as described in Adding a new storage subsystem to SVC on page 312 and Make fabric zone changes on page 312.
IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
23 IBMESX-MD1 online managed 4 MDG_ESX_VD 55.0GB 000000000000000D DS4500 600a0b8000174233000000b4486d250300000000000000000000000000000000
24 IBMESX-MD2 online managed 4 MDG_ESX_VD 55.0GB 000000000000000E DS4500 600a0b800017443100000108486d182c00000000000000000000000000000000
25 IBMESX-MD3 online managed 4 MDG_ESX_VD 55.0GB 000000000000000F DS4500 600a0b8000174233000000b5486d255b00000000000000000000000000000000
26 mdisk26 online unmanaged 120.0GB 000000000000000A DS4700 600a0b800026b282000041f0486e210100000000000000000000000000000000
27 mdisk27 online unmanaged 100.0GB 000000000000000B DS4700 600a0b800026b282000041e3486e20cf00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>
Even though the MDisks will not stay in the SVC for long, we suggest that you rename them to more meaningful names so that they do not get confused with other MDisks being used by other activities. We also create the storage pool to hold our new MDisks. Example 6-47 shows these tasks.
Example 6-47 Rename the MDisks
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name ESX_IVD_SLES mdisk26
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name ESX_IVD_W2K3 mdisk27
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_IVD_ESX -ext 512
MDisk Group, id [5], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning
4 MDG_ESX_VD online 3 2 165.0GB 512 35.0GB 130.00GB 130.00GB 130.00GB 78 0
5 MDG_IVD_ESX online 0 0 0 512 0 0.00MB 0.00MB 0.00MB 0 0
IBM_2145:ITSO-CLS1:admin>
Our SVC environment is ready for the volume migration to image mode volumes.
IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk ESX_SLES_IVD -mdisk ESX_IVD_SLES -mdiskgrp MDG_IVD_ESX
IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk ESX_W2k3_IVD -mdisk ESX_IVD_W2K3 -mdiskgrp MDG_IVD_ESX
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
23 IBMESX-MD1 online managed 4 MDG_ESX_VD 55.0GB 000000000000000D DS4500 600a0b8000174233000000b4486d250300000000000000000000000000000000
24 IBMESX-MD2 online managed 4 MDG_ESX_VD 55.0GB 000000000000000E DS4500 600a0b800017443100000108486d182c00000000000000000000000000000000
25 IBMESX-MD3 online managed 4 MDG_ESX_VD 55.0GB 000000000000000F DS4500 600a0b8000174233000000b5486d255b00000000000000000000000000000000
26 ESX_IVD_SLES online image 5 MDG_IVD_ESX 120.0GB 000000000000000A DS4700 600a0b800026b282000041f0486e210100000000000000000000000000000000
27 ESX_IVD_W2K3 online image 5 MDG_IVD_ESX 100.0GB 000000000000000B DS4700 600a0b800026b282000041e3486e20cf00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>
During the migration, our ESX server is unaware that its data is being physically moved between storage subsystems. We can continue to run and continue to use the virtual machines that are running on the server. You can check the migration status with the svcinfo lsmigrate command, as shown in Example 6-49 on page 318.
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type Migrate_to_Image
progress 2
migrate_source_vdisk_index 29
migrate_target_mdisk_index 27
migrate_target_mdisk_grp 5
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type Migrate_to_Image
progress 12
migrate_source_vdisk_index 30
migrate_target_mdisk_index 26
migrate_target_mdisk_grp 5
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>

After the migration has completed, the image mode volumes are ready to be removed from the ESX server, and the real LUNs can be mapped and masked directly to the host by using the storage subsystem's tools.
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
1 Nile 0 30 ESX_SLES_IVD 210000E08B892BCD 60050768018301BF280000000000002A
1 Nile 1 29 ESX_W2k3_IVD 210000E08B892BCD 60050768018301BF2800000000000029
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type vdisk_UID fc_map_count copy_count
0 vdisk_A 0 io_grp0 online 2 MDG_Image 36.0GB image
29 ESX_W2k3_IVD 0 io_grp0 online 4 MDG_ESX_VD 70.0GB striped 60050768018301BF2800000000000029 0 1
30 ESX_SLES_IVD 0 io_grp0 online 4 MDG_ESX_VD 60.0GB striped 60050768018301BF280000000000002A 0 1
IBM_2145:ITSO-CLS1:admin>
2. Shut down and suspend all guests using the LUNs. You can use the same method that is used in Moving VMware guest LUNs on page 309 to identify the guests that are using this LUN. 3. Remove the volumes from the host by using the svctask rmvdiskhostmap command (Example 6-51). To double-check that the volumes have been removed use the svcinfo lshostvdiskmap command, which shows that these volumes are no longer mapped to the ESX server.
Example 6-51 Remove the volumes from the host
IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Nile ESX_W2k3_IVD
IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Nile ESX_SLES_IVD

4. Remove the volumes from the SVC by using the svctask rmvdisk command, which makes the MDisks unmanaged, as shown in Example 6-52.

Cached data: When you run the svctask rmvdisk command, the SVC first double-checks that there is no outstanding dirty cached data for the volume that is being removed. If there is still uncommitted cached data, the command fails with this error message:
CMMVC6212E The command failed because data in the cache has not been committed to disk
You have to wait for this cached data to be committed to the underlying storage subsystem before you can remove the volume. The SVC will automatically destage uncommitted cached data two minutes after the last write activity for the volume. How much data there is to destage, and how busy the I/O subsystem is, determine how long this command takes to complete. You can check whether the volume has uncommitted data in the cache by using the svcinfo lsvdisk <VDISKNAME> command and checking the fast_write_state attribute. This attribute has the following meanings:
empty: No modified data exists in the cache.
not_empty: Modified data might exist in the cache.
corrupt: Modified data might have existed in the cache, but the data has been lost.
IBM_2145:ITSO-CLS1:admin>svctask rmvdisk ESX_W2k3_IVD
IBM_2145:ITSO-CLS1:admin>svctask rmvdisk ESX_SLES_IVD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
26 ESX_IVD_SLES online unmanaged 120.0GB 000000000000000A DS4700 600a0b800026b282000041f0486e210100000000000000000000000000000000
27 ESX_IVD_W2K3 online unmanaged 100.0GB 000000000000000B DS4700 600a0b800026b282000041e3486e20cf00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>

5. Using Storage Manager (our storage subsystem management tool), unmap and unmask the disks from the SVC back to the ESX server. Remember that in Example 6-50 on page 318, we recorded the SCSI LUN IDs. To map your LUNs on the storage subsystem, use the same SCSI LUN IDs that you used in the SVC.

Important: This is the last step that you can perform and still safely back out of everything you have done so far. Up to this point, you can reverse all of the actions that you have performed to get the server back online without data loss:
Remap and remask the LUNs back to the SVC.
Run the svctask detectmdisk command to rediscover the MDisks.
Recreate the volumes with the svctask mkvdisk command.
Remap the volumes back to the server with the svctask mkvdiskhostmap command.
After you start the next step, you might not be able to turn back without the risk of data loss.

6. Using the VMware management console, rescan to discover the new volumes. Figure 6-89 shows the view before the rescan. Figure 6-90 on page 321 shows the view after the rescan. Note that the size of the LUN has changed, because we have moved to another LUN on another storage subsystem.
During the rescan, you might receive geometry errors when ESX discovers that the old disk has disappeared. Your volume will appear with a new vmhba address, and VMware will recognize it as our VMWARE-GUESTS disk.
7. We are now ready to restart the VMware guests.
8. Finally, to make sure that the MDisks are removed from the SVC, run the svctask detectmdisk command. The MDisks are first discovered as offline, and then automatically removed when the SVC determines that there are no volumes associated with these MDisks.
type of data that is stored on those LUNs, taking into account availability, performance, and redundancy. We describe this step in 6.8.4, Migrating image mode volumes to volumes on page 331.
Move your AIX server's LUNs back to image mode volumes, so that they can be remapped and remasked directly back to the AIX server. This step starts in 6.8.5, Preparing to migrate from the SVC on page 333.
Use these activities individually or together to migrate your AIX server's LUNs from one storage subsystem to another storage subsystem by using the SVC as your migration tool. If you do not use all three activities, you can still use them to introduce the SVC into, or remove it from, your environment. The only downtime that is required for these activities is the time that it takes you to remask and remap the LUNs between the storage subsystems and your SVC. We show our AIX environment in Figure 6-91.
Figure 6-91 shows our AIX server connected to our SAN infrastructure. It has two LUNs (hdisk3 and hdisk4) that are masked directly to it from our storage subsystem. The hdisk3 disk makes up the itsoaixvg LVM group, and the hdisk4 disk makes up the itsoaixvg1 LVM group, as shown in Example 6-53 on page 323.
#lsdev
hdisk0 16 Bit LVD SCSI Disk Drive
hdisk1 16 Bit LVD SCSI Disk Drive
hdisk2 16 Bit LVD SCSI Disk Drive
hdisk3 1814 DS4700 Disk Array Device
hdisk4 1814 DS4700 Disk Array Device
#lspv
hdisk0 rootvg active
hdisk1 rootvg active
hdisk2 rootvg active
hdisk3 itsoaixvg active
hdisk4 itsoaixvg1 active
#
Our AIX server represents a typical SAN environment with a host directly using LUNs that were created on a SAN storage subsystem, as shown in Figure 6-91 on page 322: The AIX server's HBA cards are zoned so that they are in the Green (dotted line) Zone with our storage subsystem. The two LUNs, hdisk3 and hdisk4, have been defined on the storage subsystem. Using LUN masking, they are directly available to our AIX server.
IBM_2145:ITSO-CLS2:admin>svctask mkmdiskgrp -name aix_imgmdg -ext 512
MDisk Group, id [7], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning
7 aix_imgmdg online 0 0 0 512 0 0.00MB 0.00MB 0.00MB 0 0
IBM_2145:ITSO-CLS2:admin>
#lsdev -Ccadapter|grep fcs
fcs0 Available 1Z-08 FC Adapter
fcs1 Available 1D-08 FC Adapter
#lscfg -vpl fcs0
fcs0 U0.1-P2-I4/Q1 FC Adapter
Part Number.................00P4494
EC Level....................A
Serial Number...............1E3120A68D
Manufacturer................001E
Device Specific.(CC)........2765
FRU Number..................00P4495
Network Address.............10000000C932A7FB
ROS Level and ID............02C03951
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FF401210
Device Specific.(Z5)........02C03951
Device Specific.(Z6)........06433951
Device Specific.(Z7)........07433951
Device Specific.(Z8)........20000000C932A7FB
Device Specific.(Z9)........CS3.91A1
Device Specific.(ZA)........C1D3.91A1
Device Specific.(ZB)........C2D3.91A1
Device Specific.(YL)........U0.1-P2-I4/Q1
PLATFORM SPECIFIC
Name: fibre-channel
Model: LP9002
Node: fibre-channel@1
Device Type: fcp
Physical Location: U0.1-P2-I4/Q1
#lscfg -vpl fcs1
fcs1 U0.1-P2-I5/Q1 FC Adapter
Part Number.................00P4494
EC Level....................A
Serial Number...............1E3120A67B
Manufacturer................001E
Device Specific.(CC)........2765
FRU Number..................00P4495
Network Address.............10000000C932A800
ROS Level and ID............02C03891
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........02000909
Device Specific.(Z4)........FF401050
Device Specific.(Z5)........02C03891
Device Specific.(Z6)........06433891
Device Specific.(Z7)........07433891
Device Specific.(Z8)........20000000C932A800
Device Specific.(Z9)........CS3.82A1
Device Specific.(ZA)........C1D3.82A1
Device Specific.(ZB)........C2D3.82A1
Device Specific.(YL)........U0.1-P2-I5/Q1
PLATFORM SPECIFIC

Name: fibre-channel
  Model: LP9000
  Node: fibre-channel@1
  Device Type: fcp
  Physical Location: U0.1-P2-I5/Q1
#

The svcinfo lshbaportcandidate command on the SVC lists all of the WWNs, which have not yet been allocated to a host, that the SVC can see on the SAN fabric. Example 6-56 shows the output of the nodes that it found in our SAN fabric. (If a port does not show up, it indicates a zone configuration problem.)
Example 6-56 Add the host to the SVC
After verifying that the SVC can see our host (Kanaga), we create the host entry and assign the WWN to this entry, as shown with the commands in Example 6-57.
Example 6-57 Create the host entry
IBM_2145:ITSO-CLS2:admin>svctask mkhost -name Kanaga -hbawwpn 10000000C932A7FB:10000000C932A800
Host, id [5], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lshost Kanaga
id 5
name Kanaga
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 10000000C932A800
node_logged_in_count 2
state inactive
WWPN 10000000C932A7FB
node_logged_in_count 2
state inactive
IBM_2145:ITSO-CLS2:admin>
IBM_2145:ITSO-CLS2:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0  DS4500                   IBM       1742-900
1  DS4700                   IBM       1814
IBM_2145:ITSO-CLS2:admin>

Names: The svctask chcontroller command enables you to change the discovered storage subsystem name in the SVC. In complex SANs, we suggest that you rename your storage subsystems to more meaningful names.
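For example, renaming the discovered DS4700 controller might look like the following sketch; the controller ID 1 comes from the previous listing, and the new name ITSO_DS4700 is a hypothetical choice:

IBM_2145:ITSO-CLS2:admin>svctask chcontroller -name ITSO_DS4700 1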
We are ready to move the ownership of the disks to the SVC, discover them as MDisks, and give them back to the host as volumes.
#varyoffvg itsoaixvg
#varyoffvg itsoaixvg1
#lsvg
rootvg
itsoaixvg
itsoaixvg1
#lsvg -o
rootvg

3. Using Storage Manager (our storage subsystem management tool), we can unmap and unmask the disks from the AIX server and remap and remask the disks to the SVC.
4. From the SVC, discover the new disks with the svctask detectmdisk command. The disks are discovered and named mdiskN, where N is the next available MDisk number (starting from 0). Example 6-60 shows the commands that we used to discover our MDisks and to verify that we have the correct MDisks.
Example 6-60 Discover the new MDisks
IBM_2145:ITSO-CLS2:admin>svctask detectmdisk
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name    status mode      mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name UID
24 mdisk24 online unmanaged                             5.0GB    0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000
25 mdisk25 online unmanaged                             8.0GB    0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>

Important: Match your discovered MDisk serial numbers (the UID field in the svcinfo lsmdisk command display) with the serial numbers that you discovered earlier (in Figure 6-93 and Figure 6-94 on page 328).

5. After we have verified that we have the correct MDisks, we rename them to avoid confusion in the future when we perform other MDisk-related tasks (Example 6-61).
Example 6-61 Rename the MDisks
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name Kanaga_AIX mdisk24
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name Kanaga_AIX1 mdisk25
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name        status mode      mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name UID
24 Kanaga_AIX  online unmanaged                             5.0GB    0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 online unmanaged                             8.0GB    0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>

6. We create our image mode volumes with the svctask mkvdisk command and the -vtype image option (Example 6-62). This command virtualizes the disks in the same layout as though they were not virtualized.
Example 6-62 Create the image mode volumes
IBM_2145:ITSO-CLS2:admin>svctask mkvdisk -mdiskgrp aix_imgmdg -iogrp 0 -vtype image -mdisk Kanaga_AIX -name IVD_Kanaga
Virtual Disk, id [8], successfully created
IBM_2145:ITSO-CLS2:admin>svctask mkvdisk -mdiskgrp aix_imgmdg -iogrp 0 -vtype image -mdisk Kanaga_AIX1 -name IVD_Kanaga1
Virtual Disk, id [9], successfully created
IBM_2145:ITSO-CLS2:admin>

7. Finally, we can map the new image mode volumes to the host (Example 6-63).
Example 6-63 Map the volumes to the host
IBM_2145:ITSO-CLS2:admin>svctask mkvdiskhostmap -host Kanaga IVD_Kanaga
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO-CLS2:admin>svctask mkvdiskhostmap -host Kanaga IVD_Kanaga1
Virtual Disk to Host map, id [1], successfully created
IBM_2145:ITSO-CLS2:admin>

FlashCopy: While the application is in a quiescent state, you can choose to use FlashCopy to copy the new image volumes onto other volumes. You do not need to wait until the FlashCopy process has completed before starting your application.
Now, we are ready to perform the following steps to put the image mode volumes online:
1. Remove the old disk definitions, if you have not done so already.
2. Run the cfgmgr -vs command to rediscover the available LUNs.
3. If your application and data are on an LVM volume, rediscover the VG, and then run the varyonvg VOLUME_GROUP command to activate the VG.
4. Mount your file systems with the mount /MOUNT_POINT command.
5. You are ready to start your application.
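On our test host, this sequence looks like the following sketch; the volume group names itsoaixvg and itsoaixvg1 come from this scenario, while the mount point /itsofs is a hypothetical placeholder:

#cfgmgr -vs
#lspv
#varyonvg itsoaixvg
#varyonvg itsoaixvg1
#mount /itsofs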
IBM_2145:ITSO-CLS2:admin>svctask mkmdiskgrp -name aix_vd -ext 512
IBM_2145:ITSO-CLS2:admin>svctask detectmdisk
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name        status mode      mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name UID
24 Kanaga_AIX  online image     7            aix_imgmdg     5.0GB    0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 online image     7            aix_imgmdg     8.0GB    0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 mdisk26     online unmanaged                             6.0GB    000000000000000A DS4700 600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 mdisk27     online unmanaged                             6.0GB    000000000000000B DS4700 600a0b800026b2820000438448751da900000000000000000000000000000000
28 mdisk28     online unmanaged                             6.0GB    000000000000000C DS4700 600a0b800026b2820000439048751dc200000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name aix_vd0 mdisk26
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name aix_vd1 mdisk27
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name aix_vd2 mdisk28
IBM_2145:ITSO-CLS2:admin>svctask addmdisk -mdisk aix_vd0 aix_vd
IBM_2145:ITSO-CLS2:admin>svctask addmdisk -mdisk aix_vd1 aix_vd
IBM_2145:ITSO-CLS2:admin>svctask addmdisk -mdisk aix_vd2 aix_vd
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name        status mode    mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name UID
24 Kanaga_AIX  online image   7            aix_imgmdg     5.0GB    0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 online image   7            aix_imgmdg     8.0GB    0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 aix_vd0     online managed 6            aix_vd         6.0GB    000000000000000A DS4700 600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 aix_vd1     online managed 6            aix_vd         6.0GB    000000000000000B DS4700 600a0b800026b2820000438448751da900000000000000000000000000000000
28 aix_vd2     online managed 6            aix_vd         6.0GB    000000000000000C DS4700 600a0b800026b2820000439048751dc200000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>
IBM_2145:ITSO-CLS2:admin>svctask migratevdisk -vdisk IVD_Kanaga -mdiskgrp aix_vd
IBM_2145:ITSO-CLS2:admin>svctask migratevdisk -vdisk IVD_Kanaga1 -mdiskgrp aix_vd
IBM_2145:ITSO-CLS2:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 10
migrate_source_vdisk_index 8
migrate_target_mdisk_grp 6
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 9
migrate_target_mdisk_grp 6
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS2:admin>

After this task has completed, Example 6-66 on page 333 shows that the volumes are spread over three MDisks in the aix_vd storage pool. The old storage pool is empty.
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdiskgrp aix_vd
id 6
name aix_vd
status online
mdisk_count 3
vdisk_count 2
capacity 18.0GB
extent_size 512
free_capacity 5.0GB
virtual_capacity 13.00GB
used_capacity 13.00GB
real_capacity 13.00GB
overallocation 72
warning 0
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdiskgrp aix_imgmdg
id 7
name aix_imgmdg
status online
mdisk_count 2
vdisk_count 0
capacity 13.0GB
extent_size 512
free_capacity 13.0GB
virtual_capacity 0.00MB
used_capacity 0.00MB
real_capacity 0.00MB
overallocation 0
warning 0
IBM_2145:ITSO-CLS2:admin>

Our migration to the SVC is complete. You can remove the original MDisks from the SVC, and you can remove these LUNs from the storage subsystem. If these LUNs are the last LUNs in use on our storage subsystem, we can remove the subsystem from our SAN fabric.
Other preparatory activities must be performed before we shut down the host and reconfigure the LUN masking and mapping. This section covers those activities. If you are moving the data to a new storage subsystem, it is assumed that this storage subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches. Your environment must look similar to our environment, as shown in Figure 6-95.
IBM_2145:ITSO-CLS2:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0  DS4500                   IBM       1742-900
1  DS4700                   IBM       1814           FAStT
IBM_2145:ITSO-CLS2:admin>
IBM_2145:ITSO-CLS2:admin>svctask detectmdisk
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name        status  mode      mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name UID
24 Kanaga_AIX  offline managed   7            aix_imgmdg     5.0GB    0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 offline managed   7            aix_imgmdg     8.0GB    0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 aix_vd0     online  managed   6            aix_vd         6.0GB    000000000000000A DS4700 600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 aix_vd1     online  managed   6            aix_vd         6.0GB    000000000000000B DS4700 600a0b800026b2820000438448751da900000000000000000000000000000000
28 aix_vd2     online  managed   6            aix_vd         6.0GB    000000000000000C DS4700 600a0b800026b2820000439048751dc200000000000000000000000000000000
29 mdisk29     online  unmanaged                             10.0GB   0000000000000010 DS4500 600a0b8000174233000000b84876512f00000000000000000000000000000000
30 mdisk30     online  unmanaged                             10.0GB   0000000000000011 DS4500 600a0b80001744310000010e4876444600000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>

Even though the MDisks will not stay in the SVC for long, we suggest that you rename them to more meaningful names so that they do not get confused with other MDisks that are used by other activities. We also create the storage pool to hold our new MDisks, as shown in Example 6-69 on page 336.
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name AIX_MIG mdisk29
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name AIX_MIG1 mdisk30
IBM_2145:ITSO-CLS2:admin>svctask mkmdiskgrp -name KANAGA_AIXMIG -ext 512
MDisk Group, id [3], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdiskgrp
id name          status  mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning
3  KANAGA_AIXMIG online  0           0           0        512         0             0.00MB           0.00MB        0.00MB        0              0
6  aix_vd        online  3           2           18.0GB   512         5.0GB         13.00GB          13.00GB       13.00GB       72             0
7  aix_imgmdg    offline 2           0           13.0GB   512         13.0GB        0.00MB           0.00MB        0.00MB        0              0
IBM_2145:ITSO-CLS2:admin>
At this point, our SVC environment is ready for the volume migration to image mode volumes.
IBM_2145:ITSO-CLS2:admin>svctask migratetoimage -vdisk IVD_Kanaga -mdisk AIX_MIG -mdiskgrp KANAGA_AIXMIG
IBM_2145:ITSO-CLS2:admin>svctask migratetoimage -vdisk IVD_Kanaga1 -mdisk AIX_MIG1 -mdiskgrp KANAGA_AIXMIG
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name        status  mode    mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name UID
24 Kanaga_AIX  offline managed 7            aix_imgmdg     5.0GB    0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 offline managed 7            aix_imgmdg     8.0GB    0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 aix_vd0     online  managed 6            aix_vd         6.0GB    000000000000000A DS4700 600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 aix_vd1     online  managed 6            aix_vd         6.0GB    000000000000000B DS4700 600a0b800026b2820000438448751da900000000000000000000000000000000
28 aix_vd2     online  managed 6            aix_vd         6.0GB    000000000000000C DS4700 600a0b800026b2820000439048751dc200000000000000000000000000000000
29 AIX_MIG  online image 3 KANAGA_AIXMIG 10.0GB 0000000000000010 DS4500 600a0b8000174233000000b84876512f00000000000000000000000000000000
30 AIX_MIG1 online image 3 KANAGA_AIXMIG 10.0GB 0000000000000011 DS4500 600a0b80001744310000010e4876444600000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>svcinfo lsmigrate
migrate_type Migrate_to_Image
progress 50
migrate_source_vdisk_index 9
migrate_target_mdisk_index 30
migrate_target_mdisk_grp 3
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type Migrate_to_Image
progress 50
migrate_source_vdisk_index 8
migrate_target_mdisk_index 29
migrate_target_mdisk_grp 3
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS2:admin>
During the migration, our AIX server is unaware that its data is being physically moved between storage subsystems. After the migration is complete, the image mode volumes are ready to be removed from the AIX server, and the real LUNs can be mapped and masked directly to the host by using the storage subsystem's tool.
3. Remove the volumes from the host by using the svctask rmvdiskhostmap command (Example 6-71). To double-check that you have removed the volumes, use the svcinfo lshostvdiskmap command, which shows that these disks are no longer mapped to the AIX server.
Example 6-71 Remove the volumes from the host
IBM_2145:ITSO-CLS2:admin>svctask rmvdiskhostmap -host Kanaga IVD_Kanaga
IBM_2145:ITSO-CLS2:admin>svctask rmvdiskhostmap -host Kanaga IVD_Kanaga1
IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Kanaga
IBM_2145:ITSO-CLS2:admin>

4. Remove the volumes from the SVC by using the svctask rmvdisk command, which makes the MDisks unmanaged, as shown in Example 6-72.

Cached data: When you run the svctask rmvdisk command, the SVC first double-checks that there is no outstanding dirty cached data for the volume being removed. If uncommitted cached data still exists, the command fails with the following error message:
CMMVC6212E The command failed because data in the cache has not been committed to disk
You will have to wait for this cached data to be committed to the underlying storage subsystem before you can remove the volume. The SVC automatically destages uncommitted cached data two minutes after the last write activity for the volume. How much data there is to destage, and how busy the I/O subsystem is, determine how long this command takes to complete.
You can check whether the volume has uncommitted data in the cache by using the svcinfo lsvdisk <VDISKNAME> command and checking the fast_write_state attribute. This attribute has the following meanings:
empty      No modified data exists in the cache.
not_empty  Modified data might exist in the cache.
corrupt    Modified data might have existed in the cache, but any modified data has been lost.
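For example, before removing IVD_Kanaga, you could confirm that its cache is clean. This is a minimal sketch using the volume name from this scenario; only the relevant line of the lsvdisk output is shown:

IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk IVD_Kanaga
...
fast_write_state empty
...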
IBM_2145:ITSO-CLS2:admin>svctask rmvdisk IVD_Kanaga
IBM_2145:ITSO-CLS2:admin>svctask rmvdisk IVD_Kanaga1
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name     status mode      mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name UID
29 AIX_MIG  online unmanaged                             10.0GB   0000000000000010 DS4500 600a0b8000174233000000b84876512f00000000000000000000000000000000
30 AIX_MIG1 online unmanaged                             10.0GB   0000000000000011 DS4500 600a0b80001744310000010e4876444600000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>
5. Using Storage Manager (our storage subsystem management tool), unmap and unmask the disks from the SVC back to the AIX server.

Important: This step is the last step that you can perform and still safely back out of everything you have done so far. Up to this point, you can reverse all of the actions that you have performed to get the server back online without data loss:
Remap and remask the LUNs back to the SVC.
Run the svctask detectmdisk command to rediscover the MDisks.
Recreate the volumes with the svctask mkvdisk command.
Remap the volumes back to the server with the svctask mkvdiskhostmap command.
After you start the next step, you might not be able to turn back without the risk of data loss.

We are ready to access the LUNs from the AIX server. If all of the zoning, LUN masking, and mapping were done successfully, our AIX server boots as though nothing has happened:
1. Run the cfgmgr -S command to discover the storage subsystem.
2. Use the lsdev -Ccdisk command to verify the discovery of the new disks.
3. Remove the references to all of the old disks. Example 6-73 shows the removal using SDD, and Example 6-74 on page 340 shows the removal using SDDPCM.
Example 6-73 Remove references to old paths using SDD
#lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0  16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0  16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Available 1Z-08-02      1742-900 (900) Disk Array Device
hdisk4 Available 1Z-08-02      1742-900 (900) Disk Array Device
hdisk5 Defined   1Z-08-02      SAN Volume Controller Device
hdisk6 Defined   1Z-08-02      SAN Volume Controller Device
hdisk7 Defined   1D-08-02      SAN Volume Controller Device
hdisk8 Defined   1D-08-02      SAN Volume Controller Device
hdisk10 Defined  1Z-08-02      SAN Volume Controller Device
hdisk11 Defined  1Z-08-02      SAN Volume Controller Device
hdisk12 Defined  1D-08-02      SAN Volume Controller Device
hdisk13 Defined  1D-08-02      SAN Volume Controller Device
vpath0 Defined   Data Path Optimizer Pseudo Device Driver
vpath1 Defined   Data Path Optimizer Pseudo Device Driver
vpath2 Defined   Data Path Optimizer Pseudo Device Driver
# for i in 5 6 7 8 10 11 12 13; do rmdev -dl hdisk$i -R;done
hdisk5 deleted
hdisk6 deleted
hdisk7 deleted
hdisk8 deleted
hdisk10 deleted
hdisk11 deleted
hdisk12 deleted
hdisk13 deleted
#for i in 0 1 2; do rmdev -dl vpath$i -R;done
vpath0 deleted
vpath1 deleted
vpath2 deleted
#lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0  16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0  16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Available 1Z-08-02      1742-900 (900) Disk Array Device
hdisk4 Available 1Z-08-02      1742-900 (900) Disk Array Device

Example 6-74 Remove references to old paths using SDDPCM

# lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0  16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0  16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Defined   1D-08-02      MPIO FC 2145
hdisk4 Defined   1D-08-02      MPIO FC 2145
hdisk5 Available 1D-08-02      MPIO FC 2145
# for i in 3 4; do rmdev -dl hdisk$i -R;done
hdisk3 deleted
hdisk4 deleted
# lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0  16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0  16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk5 Available 1D-08-02      MPIO FC 2145
4. If your application and data are on an LVM volume, rediscover the VG, and then run the varyonvg VOLUME_GROUP command to activate the VG.
5. Mount your file systems with the mount /MOUNT_POINT command.
6. You are ready to start your application.
Finally, to make sure that the MDisks are removed from the SVC, run the svctask detectmdisk command. The MDisks are first discovered as offline and are then automatically removed after the SVC determines that there are no volumes associated with them.
3. Depending on your operating system, unmount the selected LUNs or shut down the host.
4. Add the SVC between your storage and the host.
5. Mount the LUNs or start the host again.
6. Start the migration.
7. After the migration process is complete, unmount the selected LUNs or shut down the host.
8. Remove the SVC from your SAN.
9. Mount the LUNs, or start the host again.
10. The migration is complete.
As you can see, extremely little downtime is required. If you prepare everything correctly, you can reduce your downtime to a few minutes. The copy process is handled by the SVC, so the host does not hinder the performance while the migration progresses.
To use the SVC for storage migrations, perform the steps that are described in the following sections:
6.5.2, Adding the SVC between the host system and the LSI 3500 on page 241
6.5.6, Migrating the volume from image mode to image mode on page 268
6.5.7, Removing image mode data from the SVC on page 278
As shown in Figure 6-97, a thin-provisioned volume has these components:

Used capacity: This term specifies the portion of real capacity that is being used to store data. For non-thin-provisioned copies, this value is the same as the volume capacity. If the volume copy is thin-provisioned, the value increases from zero to the real capacity value as more of the volume is written to.

Real capacity: This capacity is the space that is actually allocated in the storage pool. In a thin-provisioned volume, this value can differ from the total capacity.

Free capacity: This value specifies the difference between the real capacity and the used capacity. The SVC continuously tries to keep this contingency capacity available. If the used capacity approaches the real capacity and the volume has been configured with the -autoexpand option, the SVC automatically expands the space that is allocated to this volume to restore the contingency.

Grains: This value is the smallest unit into which the allocated space can be divided.

Metadata: This value is allocated in the real capacity, and it tracks the used capacity, real capacity, and free capacity.
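These parameters come together on the mkvdisk command. The following sketch creates a thin-provisioned volume with 2% initially allocated real capacity, automatic expansion, and a 32 KB grain size; the pool ID, I/O group, and volume name are hypothetical placeholders:

IBM_2145:ITSO-CLS2:admin>svctask mkvdisk -mdiskgrp 1 -iogrp 0 -vtype striped -size 15 -unit gb -rsize 2% -autoexpand -grainsize 32 -name VD_Thin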
IBM_2145:ITSO-CLS2:admin>svctask mkvdisk -mdiskgrp 0 -iogrp 0 -mdisk 0:1:2:3:4:5 -node 1 -vtype striped -size 15 -unit gb -fmtdisk -name VD_Full
Virtual Disk, id [2], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_Full
id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status offline
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
capacity 15.00GB
type striped
formatted yes
.
.
vdisk_UID 60050768018401BF280000000000000B
mdisk_grp_name MDG_DS47
used_capacity 15.00GB
real_capacity 15.00GB
free_capacity 0.00MB
overallocation 100

2. We then add a thin-provisioned volume copy with the volume mirroring option by using the addvdiskcopy command and the autoexpand parameter, as shown in Example 6-76 on page 344.
Example 6-76 addvdiskcopy command

IBM_2145:ITSO-CLS2:admin>svctask addvdiskcopy -mdiskgrp 1 -mdisk 6:7:8:9 -vtype striped -rsize 2% -autoexpand -grainsize 32 -unit gb VD_Full
VDisk [2] copy [1] successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_Full
id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id many
mdisk_grp_name many
capacity 15.00GB
type many
formatted yes
mdisk_id many
mdisk_name many
vdisk_UID 60050768018401BF280000000000000B
sync_rate 50
copy_count 2

copy_id 0
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
used_capacity 15.00GB
real_capacity 15.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

copy_id 1
status online
sync no
primary no
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32

As you can see in Example 6-76, VD_Full has a copy with copy_id 1 whose used_capacity is 0.41 MB, which is equal to the metadata, because the disk contains only zeros.
The real_capacity is 323.57 MB, which is equal to the -rsize 2% value that is specified in the addvdiskcopy command. The free_capacity is 323.17 MB, which is equal to the real capacity minus the used capacity. If zeros are written on the disk, the thin-provisioned volume does not consume space. Example 6-77 shows that the thin-provisioned copy does not consume space even when the copies are in sync.
Example 6-77 Thin-provisioned volume display
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisksyncprogress 2
vdisk_id vdisk_name copy_id progress estimated_completion_time
2        VD_Full    0       100
2        VD_Full    1       100
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_Full
id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id many
mdisk_grp_name many
capacity 15.00GB
type many
formatted yes
mdisk_id many
mdisk_name many
vdisk_UID 60050768018401BF280000000000000B
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 2

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 15.00GB
real_capacity 15.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

copy_id 1
status online
sync yes
primary no
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32

3. We can split the volume mirror or remove one of the copies, keeping the thin-provisioned copy as our valid copy, by using the splitvdiskcopy command or the rmvdiskcopy command:
If you need your copy as a thin-provisioned clone, we suggest that you use the splitvdiskcopy command, because that command generates a new volume that you can map to any server that you want.
If you need your copy because you are migrating from a fully allocated volume to a thin-provisioned volume without any effect on the server operations, we suggest that you use the rmvdiskcopy command. In this case, the original volume name is kept and it remains mapped to the same server.
Example 6-78 shows the splitvdiskcopy command.
Example 6-78 splitvdiskcopy command
IBM_2145:ITSO-CLS2:admin>svctask splitvdiskcopy -copy 1 -name VD_SEV VD_Full
Virtual Disk, id [7], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk -filtervalue name=VD*
id name    IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type    FC_id FC_name RC_id RC_name vdisk_UID                        fc_map_count copy_count fast_write_state
2  VD_Full 0           io_grp0       online 0            MDG_DS47       15.00GB  striped                             60050768018401BF280000000000000B 0            1          empty
7  VD_SEV  0           io_grp0       online 1            MDG_DS83       15.00GB  striped                             60050768018401BF280000000000000D 0            1          empty
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_SEV
id 7
name VD_SEV
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
capacity 15.00GB
type striped
formatted no
vdisk_UID 60050768018401BF280000000000000D
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32

Example 6-79 shows the rmvdiskcopy command.
Example 6-79 rmvdiskcopy command
IBM_2145:ITSO-CLS2:admin>svctask rmvdiskcopy -copy 0 VD_Full
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk -filtervalue name=VD*
id name    IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type    FC_id FC_name RC_id RC_name vdisk_UID                        fc_map_count copy_count fast_write_state
2  VD_Full 0           io_grp0       online 1            MDG_DS83       15.00GB  striped                             60050768018401BF280000000000000B 0            1          empty
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk 2
id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
capacity 15.00GB
type striped
formatted no
vdisk_UID 60050768018401BF280000000000000B
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 1
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32
Chapter 7. Easy Tier
In this chapter we describe the function provided by the Easy Tier disk performance optimization feature of the SAN Volume Controller. We also explain how to activate the Easy Tier process for both evaluation purposes and for automatic extent migration.
MDisks that are used in a single-tier storage pool should have the same hardware characteristics, for example, the same RAID type, RAID array size, disk type, disk revolutions per minute (RPM), and controller performance characteristics.
Figure 7-2 shows a scenario in which a storage pool is populated with two different MDisk types: one belonging to an SSD array, and one belonging to an HDD array. Although this example shows RAID5 arrays, other RAID types can be used.
Adding SSD to the pool means additional space is also now available for new volumes, or volume expansion.
The flow between these processes is as follows:

I/O Monitoring: This process operates continuously and monitors volumes for host I/O activity. It collects performance statistics for each extent and derives a rolling average for the I/O activity. Easy Tier makes allowances for large block I/Os and therefore considers only I/Os of up to 64 KB as migration candidates. This is an efficient process and adds negligible processing overhead to the SVC nodes.

Data Placement Advisor: The Data Placement Advisor uses workload statistics to make a cost-benefit decision as to which extents are to be candidates for migration to a higher performance (SSD) tier. This process also identifies extents that need to be migrated back to a lower (HDD) tier.

Data Migration Planner: Using the extents previously identified, the Data Migration Planner step builds the extent migration plan for the storage pool.

Data Migrator: The Data Migrator step involves scheduling and the actual movement or migration of the volume's extents up to, or down from, the high disk tier. The extent migration rate is capped at a maximum of 15 MBps, which equates to around 2 TB per day migrated between disk tiers.

When relocating volume extents, Easy Tier performs these actions:
It attempts to migrate the most active volume extents up to SSD first.
To ensure that a free extent is available, a less frequently accessed extent might first need to be migrated back to HDD.
A previous migration plan and any queued extents that are not yet relocated are abandoned.
pool. This is typically done for a single tier pool containing only HDDs, so that the benefits of adding SSDs to the pool can be evaluated prior to any major hardware acquisition. A statistics summary file is created in the /dumps directory of the SVC nodes, named dpa_heat.nodeid.yymmdd.hhmmss.data. This file can be offloaded from the SVC nodes with pscp (using the -load option) or by using the GUI, as shown in 7.4.1, Measuring by using the Storage Advisor Tool on page 357. A web browser is used to view the report created by the tool.
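As a sketch, offloading the heat file with PuTTY's pscp might look like the following command; the saved PuTTY session name ITSO-SVC1, the cluster address, the node ID and time stamp in the file name, and the local target directory are all hypothetical placeholders:

pscp -load ITSO-SVC1 admin@svccluster:/dumps/dpa_heat.1.111017.131230.data c:\temp\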
Storage pool Easy Tier setting  Number of tiers in the pool  Volume copy Easy Tier setting  Volume copy Easy Tier status
Off                             One                          Off                            Inactive (see note 2)
Off                             One                          On                             Inactive (see note 2)
Off                             Two                          Off                            Inactive (see note 2)
Off                             Two                          On                             Inactive (see note 2)
Auto (see note 5)               One                          Off                            Inactive (see note 2)
Auto (see note 5)               One                          On                             Inactive (see note 2)
Auto (see note 5)               Two                          Off                            Measured (see note 3)
Auto (see note 5)               Two                          On                             Active (see note 1)
On                              One                          Off                            Measured (see note 3)
On                              One                          On                             Measured (see note 3)
On                              Two                          Off                            Measured (see note 3)
On                              Two                          On                             Active (see note 1)
Notes:
1. If the volume copy is in image or sequential mode or is being migrated, the volume copy Easy Tier status is measured instead of active.
2. When the volume copy status is inactive, no Easy Tier functions are enabled for that volume copy.
3. When the volume copy status is measured, the Easy Tier function collects usage statistics for the volume, but automatic data placement is not active.
4. When the volume copy status is active, the Easy Tier function operates in automatic data placement mode for that volume.
5. The default Easy Tier setting for a storage pool is auto, and the default Easy Tier setting for a volume copy is on. This means that Easy Tier functions are disabled for storage pools with a single tier, and that automatic data placement mode is enabled for all striped volume copies in a storage pool with two tiers.

Examples of the use of these parameters are shown in 7.6, Using Easy Tier with the SVC CLI on page 365 and 7.7, Using Easy Tier with the SVC GUI on page 369.
7.3.1 Prerequisites
No Easy Tier license is required for the SVC; Easy Tier comes as a standard feature. For Easy Tier to migrate extents, you need disk storage that provides different tiers, for example, a mix of SSD and HDD.
When a volume is migrated out of a storage pool that is managed with Easy Tier, then Easy Tier automatic data placement mode is no longer active on that volume. Automatic data placement is also turned off while a volume is being migrated even if it is between pools that both have Easy Tier automatic data placement enabled. Automatic data placement for the volume is re-enabled when the migration is complete.
7.3.3 Limitations
Limitations exist when using IBM System Storage Easy Tier on the SAN Volume Controller.

Limitations when removing an MDisk by using the -force parameter
When an MDisk is deleted from a storage pool with the -force parameter, extents in use are migrated to MDisks in the same tier as the MDisk being removed, if possible. If insufficient extents exist in that tier, extents from the other tier are used.

Limitations when migrating extents
When Easy Tier automatic data placement is enabled for a volume, the svctask migrateexts command-line interface (CLI) command cannot be used on that volume.

Limitations when migrating a volume to another storage pool
When the SAN Volume Controller migrates a volume to a new storage pool, Easy Tier automatic data placement between the two tiers is temporarily suspended. After the volume is migrated to its new storage pool, Easy Tier automatic data placement between the generic SSD tier and the generic HDD tier resumes for the moved volume, if appropriate.
When the SAN Volume Controller migrates a volume from one storage pool to another, it attempts to migrate each extent to an extent in the new storage pool from the same tier as the original extent. In several cases, such as a target tier being unavailable, the other tier is used. For example, the generic SSD tier might be unavailable in the new storage pool.

Limitations when migrating a volume to image mode
Easy Tier automatic data placement does not support image mode. When a volume with Easy Tier automatic data placement mode active is migrated to image mode, Easy Tier automatic data placement mode is no longer active on that volume. Image mode and sequential volumes cannot be candidates for automatic data placement; however, Easy Tier supports evaluation mode for image mode volumes.
Best practices
Always set the storage pool -easytier value to on rather than to the default value auto. This setting makes it easier to turn on evaluation mode for existing single-tier pools, and no further changes are needed when you move to multitier pools. See Easy Tier activation on page 354 for more information about the mix of pool and volume settings. Using Easy Tier can also make it more appropriate to use smaller storage pool extent sizes.
Offloading statistics
To extract the summary performance data, use one of these methods:
The distribution of hot data and cold data for each volume is shown in the volume heat distribution report. The report displays the portion of the capacity of each volume on SSD (red), and HDD (blue), as shown in Figure 7-6.
Click Configure Storage, and the Configure Internal Storage window appears (Figure 7-8).
This window shows the four available drives. The next step is to select the configuration preset from the three options available. In the following sections, we show each of these configurations and the differences between them.
Next, the storage pool configuration has to be selected. For this example, we choose to expand an existing storage pool, ssd_pool, as shown in Figure 7-10 on page 361.
Finally, click Finish to apply the configuration. The resulting output is shown in Figure 7-11.
From the Pools > MDisks by Pools menu, you can see the newly configured MDisk, mdisk12 in our example, added to the storage pool that we selected previously, ssd_pool. Right-clicking the MDisk shows the RAID level configured for the selected preset (Figure 7-12).
Next, we again choose to expand the existing ssd_pool storage pool. We skip the storage pool selection window and jump straight to the resulting output, which is shown in Figure 7-14.
Finally, from the Pools > MDisks by Pools menu, the two newly created MDisks (mdisk10 and mdisk13) are displayed and, as shown in Figure 7-15 on page 363, each one is configured as a RAID-1 array.
From the Member Drives tab you will see that each MDisk is configured with drives that belong to different nodes from the same I/O group, as shown in Figure 7-16, providing node redundancy. Refer to Figure 7-7 on page 359 to see which node owns which drives.
7.5.3 Striped
Finally, we show the Striped configuration preset (Figure 7-17 on page 364). This preset creates RAID-0 arrays with drives from the same node, providing no redundancy in case of a node failure. Because in our configuration the SSDs are spread across two SVC nodes, two different MDisks will be created.
We skip the Storage Pool selection window and show the resulting output in Figure 7-18.
From the Pools > MDisks by Pools menu, the two newly created MDisks (mdisk10 and mdisk12) are displayed. Each MDisk is a RAID-0 array, as shown in Figure 7-19 on page 364.
IBM_2145:ITSO_SVC3:superuser>lsmdiskgrp Single_Tier_Pool
id 1
name Single_Tier_Pool
status online
mdisk_count 3
vdisk_count 3
...
easy_tier off
easy_tier_status inactive
...
tier generic_ssd
tier_mdisk_count 0
tier_capacity 0.00MB
...
tier generic_hdd
tier_mdisk_count 3
tier_capacity 383.00GB
IBM_2145:ITSO_SVC3:superuser>chmdiskgrp -easytier on Single_Tier_Pool
IBM_2145:ITSO_SVC3:superuser>lsmdiskgrp Single_Tier_Pool
id 1
name Single_Tier_Pool
status online
mdisk_count 3
vdisk_count 3
...
easy_tier on
easy_tier_status active
tier generic_ssd
tier_mdisk_count 0
tier_capacity 0.00MB
...
tier generic_hdd
tier_mdisk_count 3
tier_capacity 383.00GB
------------ Now Repeat for the Volume ------------
IBM_2145:ITSO_SVC3:superuser>lsvdisk -filtervalue "mdisk_grp_name=Single*"
id name   IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name   capacity type
2  vdisk0 0           io_grp0       online 1            Single_Tier_Pool 10.00GB  striped
IBM_2145:ITSO_SVC3:superuser>lsvdisk vdisk0
id 2
name vdisk0
...
easy_tier off
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 10.00GB
...
IBM_2145:ITSO_SVC3:superuser>chvdisk -easytier on vdisk0
IBM_2145:ITSO_SVC3:superuser>lsvdisk vdisk0
id 2
name vdisk0
...
easy_tier on
easy_tier_status measured
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 10.00GB
IBM_2145:ITSO_SVC3:superuser>lsmdisk ssd_mdisk0
id 6
name ssd_mdisk0
status online
mode managed
mdisk_grp_id
mdisk_grp_name Multi_Tier_Pool
capacity 128.0GB
name vdisk0
...
mdisk_grp_name Multi_Tier_Pool
capacity 10.00GB
type striped
...
easy_tier on
easy_tier_status active
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 10.00GB
The volume in the example is measured by Easy Tier, and hot extents will be migrated from the HDD tier MDisk to the SSD tier MDisk. Also note that the generic_hdd tier still holds the entire capacity of the volume, because the generic_ssd capacity value is 0.00 MB. The allocated capacity on the generic_hdd tier will gradually change as Easy Tier optimizes performance by moving extents into the generic_ssd tier.
As you can now see, we have two different tiers available in our SVC cluster: generic_ssd and generic_hdd. At this time, extents are being used on both the generic_ssd tier and the generic_hdd tier; see the free_capacity values. However, this command does not tell us whether the SSD storage is being used by the Easy Tier process. To determine whether Easy Tier is actively measuring or migrating extents within the cluster, you need to view the volume status, as shown previously in Example 7-5 on page 368.
Our environment is an SVC cluster with the following resources available:
1 x I/O Group with two 2145-8G4 nodes
1 x external storage subsystem with SSDs
1 x external storage subsystem with HDDs
This is because, by default, all MDisks are initially discovered as hard disk drives (HDDs); see the MDisk properties panel in Figure 7-22 on page 371.
Therefore, for Easy Tier to take effect, you need to change the disk tier. Right-click the selected MDisk and choose Select Tier, as shown in Figure 7-23.
Now set the MDisk Tier to Solid-State Drive, as shown in Figure 7-24 on page 371.
Click Close after verifying that the task completed successfully. The MDisk now has the correct tier, so the properties value is correct for a multitier pool, as shown in Figure 7-24.
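The same change can be made from the CLI with the chmdisk command. The following sketch assumes an MDisk named ssd_mdisk0, as used earlier in this chapter:

IBM_2145:ITSO_SVC3:superuser>chmdisk -tier generic_ssd ssd_mdisk0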
Chapter 8.
8.1 FlashCopy
The FlashCopy function of the IBM System Storage SAN Volume Controller (SVC) provides the capability to perform a point-in-time copy of one or more volumes. In this section, we describe the inner workings of FlashCopy and provide details of its configuration and use.

You can use FlashCopy to help you solve critical and challenging business needs that require duplication of data on your source volume. Volumes can remain online and active while you create consistent copies of the data sets. Because the copy is performed at the block level, it operates below the host operating system and cache and is therefore transparent to the host.

Note: Because FlashCopy operates at the block level, below the host operating system and cache, those levels do need to be flushed to produce consistent FlashCopies.

While the FlashCopy operation is performed, the source volume is frozen briefly to initialize the FlashCopy bitmap, and then I/O is allowed to resume. Although several FlashCopy options require the data to be copied from the source to the target in the background, which can take time to complete, the resulting data on the target volume is presented so that the copy appears to have completed immediately. This is accomplished through the use of a bitmap (or bit array), which tracks changes to the data after the FlashCopy is initiated, and an indirection layer, which allows data to be read from the source volume transparently.
Usually when FlashCopy is used for backup purposes, the target data is managed as read-only at the operating system level. This provides extra security by ensuring that your target data has not been modified and remains true to the source.
Bitmaps governing I/O redirection (the I/O indirection layer) are maintained in both nodes of the SVC I/O Group to prevent a single point of failure.
FlashCopy mappings and Consistency Groups can be automatically withdrawn after the completion of the background copy.
Thin-provisioned FlashCopy consumes disk space only when updates are made to the source or target data, not for the entire capacity of a volume copy.
FlashCopy licensing is based on the virtual capacity of the source volumes.
Incremental FlashCopy copies all of the data the first time and then only the changes for all subsequent FlashCopies. Incremental FlashCopy can substantially reduce the time required to recreate an independent image.
Reverse FlashCopy enables FlashCopy targets to become restore points for the source without breaking the FlashCopy relationship and without having to wait for the original copy operation to complete.
The maximum number of supported FlashCopy mappings is 8192 per SVC cluster.
The size of the source and target volumes cannot be altered (increased or decreased) while a FlashCopy mapping is defined.
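As a sketch of how these options appear on the CLI, the following commands create and then prepare and start an incremental FlashCopy mapping; the volume names VOL_SRC and VOL_TGT and the mapping name INC_MAP are hypothetical placeholders:

IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source VOL_SRC -target VOL_TGT -name INC_MAP -copyrate 50 -incremental
IBM_2145:ITSO-CLS1:admin>svctask startfcmap -prep INC_MAP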
Note that regardless of whether the initial FlashCopy map (volume X to volume Y) is incremental, the Reverse FlashCopy operation copies only the modified data. Consistency Groups are reversed by creating a set of new reverse FlashCopy maps and adding them to a new reverse Consistency Group. Consistency Groups cannot contain more than one FlashCopy map with the same target volume.
Figure 8-2 Tivoli Storage Manager for Advanced Copy Services features
Tivoli FlashCopy Manager provides many of the features of Tivoli Storage Manager for Advanced Copy Services without the requirement to use Tivoli Storage Manager. With Tivoli FlashCopy Manager, you can coordinate and automate host preparation steps before issuing FlashCopy start commands to ensure that a consistent backup of the application is made. You can put databases into hot backup mode and flush filesystem cache prior to starting the FlashCopy. FlashCopy Manager also allows for easier management of on-disk backups using FlashCopy, and provides a simple interface to perform the reverse operation. Figure 8-3 on page 380 shows the FlashCopy Manager feature.
With IBM Tivoli FlashCopy Manager V3.1, released October 21, 2011, the major addition was support for VMware vSphere, which leverages IBM FlashCopy to provide extremely quick and efficient backups of VMware environments. This release also integrates with IBM Tivoli Storage Manager for Virtual Environments, which allows backup of point-in-time images into the Tivoli Storage Manager infrastructure for long-term storage. The addition of VMware vSphere brings support and application awareness for FlashCopy Manager up to the following list:
1. MMC Snap-in and Base System Services for Microsoft Windows
2. Microsoft Exchange 2007 and 2010
3. VSS Requestor for Microsoft Windows
4. IBM DB2 (with or without SAP) for AIX, Solaris SPARC, HP-UX (IA-64), and Linux x86_64
5. Oracle for AIX, Solaris SPARC, HP-UX (IA-64), and Linux x86_64
6. Oracle with SAP for AIX, Solaris SPARC, HP-UX (IA-64), and Linux x86_64
7. Generic Backup Agent support for custom applications on AIX, Solaris SPARC, HP-UX (IA-64), and Linux x86_64
8. VMware vSphere on Linux x86_64
If you would like to learn more about IBM Tivoli FlashCopy Manager, visit the following link, because describing IBM Tivoli FlashCopy Manager in detail is beyond the scope of this document:
http://www-01.ibm.com/software/tivoli/products/storage-flashcopy-mgr/
Figure 8-6 shows four targets and mappings taken from a single source, along with their interdependencies. In this example, Target 1 is the oldest (as measured from the time it was started) through to Target 4, which is the newest. The ordering is important because of the way in which data is copied when multiple target volumes are defined and because of the dependency chain that results.

A write to the source volume does not cause its data to be copied to all of the targets. Instead, it is copied to the newest target volume only (Target 4 in Figure 8-6). The older targets refer to newer targets first before referring to the source.

From the point of view of an intermediate target disk (neither the oldest nor the newest), it treats the set of newer target volumes and the true source volume as a type of composite source. It treats all older volumes as a kind of target (and behaves like a source to them).

If the mapping for an intermediate target volume shows 100% progress, its target volume contains a complete set of data. In this case, mappings treat the set of newer target volumes, up to and including the 100% progress target, as a form of composite source. A dependency relationship exists between a particular target and all newer targets (up to and including a target that shows 100% progress) that share the same source until all data has been copied to this target and all older targets.

You can read more about Multiple Target FlashCopy in 8.4.6, Interaction and dependency between Multiple Target FlashCopy mappings on page 387.
Note: After an individual FlashCopy mapping has been added to a Consistency Group, it can only be managed as part of the group. Operations such as prepare, start, and stop are no longer allowed on the individual mapping.
Dependent writes
To illustrate why it is crucial to use Consistency Groups when a data set spans multiple volumes, consider the following typical sequence of writes for a database update transaction:
1. A write is executed to update the database log, indicating that a database update is about to be performed.
2. A second write is executed to perform the actual update to the database.
3. A third write is executed to update the database log, indicating that the database update has completed successfully.

The database ensures the correct ordering of these writes by waiting for each step to complete before starting the next step. However, if the database log (updates 1 and 3) and the database itself (update 2) are on separate volumes, it is possible for the FlashCopy of the database volume to occur before the FlashCopy of the database log. This can result in the target volumes seeing writes (1) and (3) but not (2), because the FlashCopy of the database volume occurred before the write was completed.

In this case, if the database was restarted using the backup that was made from the FlashCopy target volumes, the database log indicates that the transaction had completed successfully when, in fact, it had not, because the FlashCopy of the volume with the database file was started (the bitmap was created) before the write had completed to the volume. Therefore, the transaction is lost and the integrity of the database is in question.

To overcome the issue of dependent writes across volumes and to create a consistent image of the client data, it is necessary to perform a FlashCopy operation on multiple volumes as an atomic operation. To accomplish this, the SVC supports the concept of Consistency Groups. A FlashCopy Consistency Group can contain up to 512 FlashCopy mappings (the maximum number of FlashCopy mappings per Consistency Group; see Table 8-1). FlashCopy commands can then be issued to the FlashCopy Consistency Group and thereby simultaneously for all of the FlashCopy mappings that are defined in the Consistency Group. For example, when issuing a FlashCopy start command to the Consistency Group, all of the FlashCopy mappings in the Consistency Group are started at the same time, resulting in a point-in-time copy that is consistent across all of the FlashCopy mappings that are contained in the Consistency Group.
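On the CLI, grouping the log and data volumes of the example above might look like the following sketch; the Consistency Group name DB_CG and the volume names are hypothetical placeholders:

IBM_2145:ITSO-CLS1:admin>svctask mkfcconsistgrp -name DB_CG
IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source DB_Data -target DB_Data_T -consistgrp DB_CG
IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source DB_Log -target DB_Log_T -consistgrp DB_CG
IBM_2145:ITSO-CLS1:admin>svctask startfcconsistgrp -prep DB_CG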
Maximum configurations
Table 8-1 lists the FlashCopy properties and maximum configurations.
Table 8-1 FlashCopy properties and maximum configuration

FlashCopy targets per source: 256
This maximum is the number of FlashCopy mappings that can exist with the same source volume.

FlashCopy mappings per cluster: 4,096
The number of mappings is no longer limited by the number of volumes in the cluster, so the FlashCopy component limit applies.

FlashCopy Consistency Groups per cluster: 127
This maximum is an arbitrary limit that is policed by the software.

FlashCopy volume space per I/O Group: 1,024 TB
This maximum is a limit on the quantity of FlashCopy mappings that use bitmap space from this I/O Group. This maximum configuration consumes all 512 MB of bitmap space for the I/O Group and allows no Metro Mirror and Global Mirror bitmap space. The default is 40 TB.

FlashCopy mappings per Consistency Group: 512
This limit is due to the time that is taken to prepare a Consistency Group with a large number of mappings.
When a FlashCopy mapping is prepared and started, the following sequence is applied:
1. Flush the write cache to the source volume or volumes that are part of a Consistency Group.
2. Put the cache into write-through mode on the source volumes.
3. Discard the cache for the target volumes.
4. Establish a sync point on all of the source volumes in the Consistency Group (creating the FlashCopy bitmap).
5. Ensure that the indirection layer governs all of the I/O to the source volumes and target volumes.
6. Enable the cache on both the source volumes and target volumes.

FlashCopy provides the semantics of a point-in-time copy using the indirection layer, which intercepts I/O directed at either the source or target volumes. The act of starting a FlashCopy mapping causes this indirection layer to become active in the I/O path, which occurs automatically across all FlashCopy mappings in the Consistency Group. The indirection layer then determines how each I/O is to be routed, based on the following factors:
The volume and the logical block address (LBA) to which the I/O is addressed
Its direction (read or write)
The state of an internal data structure, the FlashCopy bitmap

The indirection layer allows the I/O to go through to the underlying volume, redirects the I/O from the target volume to the source volume, or queues the I/O while it arranges for data to be copied from the source volume to the target volume. To explain in more detail which action is applied for each I/O, we first look at the FlashCopy bitmap.
Source reads
Reads are performed from the source volume. This is the same as for non-FlashCopy volumes.
Source writes
Writes to the source cause the grain to be copied to the target if it has not already been copied; the bitmap is updated, and then the write is performed to the source.
Target reads
Reads are performed from the target if the grain has already been copied. Otherwise, the read is performed from the source and no copy is performed.
Target writes
Writes to the target cause the grain to be copied from the source to the target, unless the entire grain is being updated on the target. In that case, the target is marked as split with the source (if there is no I/O error during the write), and the write goes directly to the target.
Target 0 is not dependent on the source, because it has completed copying. Target 0 has two dependent mappings (Target 1 and Target 2).
Target 1 is dependent upon Target 0. It remains dependent until all of Target 1 has been copied. Target 2 is dependent on it, because Target 2 is 20% copy complete. After all of Target 1 has been copied, it can then move to the Idle_copied state.
Target 2 is dependent upon Target 0 and Target 1 and remains dependent until all of Target 2 has been copied. No target is dependent on Target 2, so when all of the data has been copied to Target 2, it can move to the Idle_copied state.
Target 3 has completed copying, so it is not dependent on any other maps.
Summary of the read decision from the diagram: reads of the source are performed from the source volume. For a read of a target grain that has not yet been copied, if any newer targets exist for this source in which this grain has already been copied, the read is satisfied from the oldest of these targets; otherwise, it is read from the source.
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk Image_volume_A
id 8
name Image_volume_A
IO_group_id 0
IO_group_name io_grp0
status online
storage_pool_id 2
storage_pool_name Storage_Pool_Image
capacity 36.0GB
type image
.
.
.
autoexpand
warning
grainsize
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -size 36 -unit gb -name volume_A_copy -mdiskgrp Storage_Pool_DS47 -vtype striped -iogrp 1
Virtual Disk, id [19], successfully created
Tip: Alternatively, you can use the expandvdisksize and shrinkvdisksize commands to modify the size of the volume. See 9.5.10, Expanding a volume on page 500 and 9.5.16, Shrinking a volume on page 505 for more information. Remember that these actions must be performed before a mapping is created.

You can use an image mode volume as either a FlashCopy source volume or a target volume.
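For example, growing the target volume by 1 GB before defining the mapping might look like the following sketch; the volume name comes from the previous example:

IBM_2145:ITSO-CLS1:admin>svctask expandvdisksize -size 1 -unit gb volume_A_copy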
Description The prestartfcmap or prestartfcconsistgrp command is directed to either a Consistency Group for FlashCopy mappings that are members of a normal Consistency Group or to the mapping name for FlashCopy mappings that are stand-alone mappings. The prestartfcmap or prestartfcconsistgrp command places the FlashCopy mapping into the Preparing state. Important: The prestartfcmap or prestartfcconsistgrp command can corrupt any data that previously resided on the target volume because cached writes are discarded. Even if the FlashCopy mapping is never started, the data from the target might have logically changed during the act of preparing to start the FlashCopy mapping. The FlashCopy mapping automatically moves from the Preparing state to the Prepared state after all cached data for the source is flushed and all cached data for the target is no longer valid. When all of the FlashCopy mappings in a Consistency Group are in the Prepared state, the FlashCopy mappings can be started. To preserve the cross-volume Consistency Group, the start of all of the FlashCopy mappings in the Consistency Group must be synchronized correctly with respect to I/Os that are directed at the volumes by using the startfcmap or startfcconsistgrp command. The following actions occur during the startfcmap or startfcconsistgrp commands run: New reads and writes to all source volumes in the Consistency Group are paused in the cache layer until all ongoing reads and writes beneath the cache layer are completed. After all FlashCopy mappings in the Consistency Group are paused, the internal cluster state is set to allow FlashCopy operations. After the cluster state is set for all FlashCopy mappings in the Consistency Group, read and write operations continue on the source volumes. The target volumes are brought online. As part of the startfcmap or startfcconsistgrp command, read and write caching is enabled for both the source and target volumes. You can modify the following FlashCopy mapping properties: FlashCopy mapping name Clean rate Consistency Group Copy rate (for background copy) Automatic deletion of the mapping when the background copy is complete There are two separate mechanisms by which a FlashCopy mapping can be stopped: You have issued a command. An I/O error has occurred. This command requests that the specified FlashCopy mapping be deleted. If the FlashCopy mapping is in the Stopped state, the force flag must be used. If the flush of data from the cache cannot be completed, the FlashCopy mapping enters the Stopped state.
(Figure: FlashCopy mapping state diagram. The transition labels are Start, Modify, Stop, Delete, Flush done, and Flush failed.)
After all of the source data has been copied to the target and there are no dependent mappings, the state is set to Copied. If the option to automatically delete the mapping after the background copy completes is specified, the FlashCopy mapping is automatically deleted. If this option is not specified, the FlashCopy mapping is not automatically deleted and can be reactivated by preparing and starting again.

The following sections describe each FlashCopy mapping state.
Idle_or_copied
Read and write caching is enabled for both the source and the target. A FlashCopy mapping exists between the source and target but the source and target behave as independent volumes in this state.
Copying
The FlashCopy indirection layer governs all I/O to the source and target volumes while the background copy is running. The background copy process is copying grains from the source to the target. Reads and writes are executed on the target as though the contents of the source were instantaneously copied to the target during the startfcmap or startfcconsistgrp command. The source and target can be independently updated. Internally, the target depends on the source for certain tracks. Read and write caching is enabled on the source and the target.
Stopped
The FlashCopy was stopped either by a user command or by an I/O error. When a FlashCopy mapping is stopped, the integrity of the data on the target volume is lost. Therefore, while the FlashCopy mapping is in this state, the target volume is in the Offline state. To regain access to the target, the mapping must be started again (the previous point-in-time will be lost) or the FlashCopy mapping must be deleted. The source volume is accessible, and read/write caching is enabled for the source. In the Stopped state, a mapping can either be prepared again or deleted.
Stopping
The mapping is in the process of transferring data to a dependent mapping. The behavior of the target volume depends on whether the background copy process had completed while the mapping was in the Copying state. If the copy process had completed, the target volume remains online while the stopping copy process completes. If the copy process had not completed, data in the cache is discarded for the target volume. The target volume is taken offline, and the stopping copy process runs. After the data has been copied, a stop complete asynchronous event notification is issued. The mapping moves to the Idle/Copied state if the background copy has completed, or to the Stopped state if the background copy has not completed. The source volume remains accessible for I/O.
Suspended
The FlashCopy was in the Copying or Stopping state when access to the metadata was lost. As a result, both the source and target volumes are offline, and the background copy process has been halted. When the metadata becomes available again, the FlashCopy mapping returns to the Copying or Stopping state. Access to the source and target volumes is restored, and the background copy or stopping process resumes. Unflushed data that was written to the source or target before the FlashCopy was suspended is pinned in cache until the FlashCopy mapping leaves the Suspended state.
Preparing
The FlashCopy is in the process of preparing the mapping. While in this state, data from cache is destaged to disk and a consistent copy of the source exists on disk. At this time, the cache is operating in write-through mode, and therefore writes to the source volume experience additional latency. The target volume is reported as online, but it will not perform reads or writes; these reads and writes are failed by the SCSI front end.

Before starting the FlashCopy mapping, it is important that any cache at the host level, for example, buffers in the host operating system or application, is also instructed to flush any outstanding writes to the source volume. Performing the cache flush required as part of the startfcmap or startfcconsistgrp command causes I/Os to be delayed waiting for the cache flush to complete. To overcome this problem, SVC FlashCopy supports the prestartfcmap or prestartfcconsistgrp commands, which prepare for a FlashCopy start while still allowing I/Os to continue to the source volume.

In the Preparing state, the FlashCopy mapping is prepared by the following steps:
1. Flushing any modified write data associated with the source volume from the cache. Read data for the source is left in the cache.
2. Placing the cache for the source volume into write-through mode, so that subsequent writes wait until data has been written to disk before completing the write command that is received from the host.
3. Discarding any read or write data that is associated with the target volume from the cache.
Prepared
When in the Prepared state, the FlashCopy mapping is ready to perform a start. While the FlashCopy mapping is in this state, the target volume is in the Offline state. In the Prepared state, writes to the source volume experience additional latency because the cache is operating in write-through mode.
Table 8-4 FlashCopy mapping state summary

State           Source: Online/Offline   Source: Cache state   Target: Online/Offline                                  Target: Cache state
Idling/Copied   Online                   Write-back            Online                                                  Write-back
Copying         Online                   Write-back            Online                                                  Write-back
Stopped         Online                   Write-back            Offline                                                 N/A
Stopping        Online                   Write-back            Online if copy complete; Offline if copy not complete   N/A
Suspended       Offline                  Write-back            Offline                                                 N/A
Preparing       Online                   Write-through         Online but not accessible                               N/A
Prepared        Online                   Write-through         Online but not accessible                               N/A
A fully allocated source volume can be copied incrementally using FlashCopy to another fully allocated volume at the same time as it is copied to multiple thin-provisioned targets (taken at separate points in time). This combination allows a single full backup to be kept for recovery purposes, separates the backup workload from the production workload, and at the same time allows older thin-provisioned backups to be retained.
The grains per second numbers represent the maximum number of grains that the SVC will copy per second, assuming that the bandwidth to the managed disks (MDisks) can accommodate this rate. If the SVC is unable to achieve these copy rates because of insufficient bandwidth from the SVC nodes to the MDisks, then background copy I/O contends for resources on an equal basis with the I/O that is arriving from the hosts. Both background copy I/O and I/O that is arriving from the hosts tend to see an increase in latency and a consequential reduction in throughput. Both background copy and foreground I/O continue to make forward progress, and do not stop, hang, or cause the node to fail. The background copy is performed by both nodes of the I/O Group in which the source volume resides.
8.4.14 Synthesis
The FlashCopy functionality in SVC simply creates copies of volumes. All of the data in the source volume is copied to the destination volume, including operating system, logical volume manager, and application metadata.

Note: Certain operating systems are unable to use FlashCopy without an additional step, which is termed synthesis. In summary, synthesis performs a type of transformation on the operating system metadata on the target volume so that the operating system can use the disk.
Node failure
Normally, two copies of the FlashCopy bitmaps are maintained: one on each of the two nodes that make up the I/O Group of the source volume. When a node fails, one copy of the bitmaps, for all FlashCopy mappings whose source volume is a member of the failing node's I/O Group, becomes inaccessible. FlashCopy continues with a single copy of the FlashCopy bitmap stored as non-volatile in the remaining node in the source I/O Group. The cluster metadata is updated to indicate that the missing node no longer holds a current bitmap. When the failing node recovers, or a replacement node is added to the I/O Group, the bitmap redundancy is restored.
FlashCopy Target
This is a supported combination, with several restrictions:
1. Issuing a stop -force may cause the Remote Copy relationship to need to be fully resynchronized.
2. The code level must be 6.2.x or higher.
3. The I/O Group must be the same.
Snapshot
Options:
If Auto-Create Target:
  Thin-provisioned target with rsize = 0
  Autoexpand = on
  Target pool is the primary copy source pool
No background copy

Use case
The user wants to produce a copy of a volume without impacting the availability of the volume. The user does not anticipate a large number of changes to be made to the source or target volume; a significant proportion of the volumes will not be changed. By ensuring that only changes require a copy of data to be made, the total amount of disk space required for the copy is significantly reduced, which allows many such snapshot copies to be used in the environment. Snapshots are therefore useful for providing protection against corruption or similar issues with the validity of the data, but they do not provide protection from physical controller failures. Snapshots can also provide a vehicle for performing repeatable testing, including what-if modeling based on production data, without requiring a full copy of the data to be provisioned.
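As a hedged CLI sketch of this preset (the volume and mapping names are hypothetical, and the thin-provisioned target is assumed to already exist), a snapshot is essentially a mapping with a background copy rate of zero:

IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source vol_prod -target vol_snap -name snap_map -copyrate 0
IBM_2145:ITSO-CLS1:admin>svctask startfcmap -prep snap_map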
Clone
Options:
If auto-create target:
  Created volume is identical to the primary copy of the source volume (including storage pool)
Auto-Delete
Clean Rate = 0
Background Copy Rate = 50

Use case
Users want a copy of the volume that they can modify without impacting the original. After the clone is established, there is no expectation that it will be refreshed or that there will be any further need to reference the original production data again. If the source is thin-provisioned, the auto-created target will also be thin-provisioned.
Backup
Options:
If auto-create target:
  Created volume is identical to the primary copy of the source volume
Incremental
Clean Rate = 0
Background Copy Rate = 50

Use case
The user wants to create a copy of the volume that can be used as a backup if the source becomes unavailable, as in the case of the loss of the underlying physical controller. The user plans to periodically update the secondary copy and does not want to suffer the overhead of creating a completely new copy each time (incremental FlashCopy times are faster than a full copy, which helps to reduce the window in which the new backup is not yet fully effective). If the source is thin-provisioned, the auto-created target will also be thin-provisioned. Another use case, which is not suggested by the name, is to create and maintain (periodically refresh) an independent image that can be subjected to intensive I/O (for example, data mining) without impacting source volume performance.
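As a hedged CLI sketch (names are hypothetical), an incremental backup mapping can be created once and then restarted periodically; after the first completion, only grains changed since the previous copy are recopied:

IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source vol_prod -target vol_bkup -name bkup_map -copyrate 50 -incremental
IBM_2145:ITSO-CLS1:admin>svctask startfcmap -prep bkup_map
(later, to refresh the backup)
IBM_2145:ITSO-CLS1:admin>svctask startfcmap -prep bkup_map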
This feature does not have the ability to control back-end storage mirroring or replication. With this feature, host I/O completes when both copies are written. Prior to 6.3.0, this feature would take a copy offline when it had an I/O time-out, and then resynchronize with the online copy once it recovered. With 6.3.0, this feature has been enhanced with a tunable latency tolerance, which provides an option to give preference to losing the redundancy between the two copies. This tunable time-out value is either Latency or Redundancy.

The Latency tuning option (set with svctask chvdisk -mirrorwritepriority latency) is the default, matches the behavior found in releases prior to 6.3.0, and prioritizes host I/O latency; it yields a preference to host I/O over availability. However, you may have a need in your environment to give preference to Redundancy, when availability is more important than I/O response time. This is done with svctask chvdisk -mirrorwritepriority redundancy. Regardless of which option you choose, Volume Mirroring can provide extra protection for your environment.

With regard to migration, there are several options available:

Export to Image mode: This allows you to move storage from managed mode to image mode, which is useful if you are using the SVC as a migration device. For example, vendor A's product cannot communicate with vendor B's product, but you need to migrate existing data from vendor A to vendor B. Using Export to Image mode allows you to migrate data using Copy Services functions and then return control to the native array, while maintaining access to the hosts.

Import to Image mode: This allows you to import an existing storage MDisk or LUN with existing data from an external storage system, without putting metadata on it, so the existing data remains intact. Once imported, all Copy Services functions may be used to migrate the storage to other locations, while the data remains accessible to your hosts.

Volume migration using Volume Mirroring then Split into New Volume: This allows you to leverage the RAID-1 functionality to create two copies of data that initially have a set relationship (one primary and one secondary), and then break the relationship (both primary and no relationship) to make them independent copies of data. You can use this to migrate data between storage pools and devices. You might use this option if you want to move volumes to multiple different storage pools. Note that you can only mirror one volume at a time.

Volume migration using Move to Another Pool: This option allows any volume to be moved between storage pools without interruption to host access. It is effectively a quicker version of Volume Mirroring then Split into New Volume. You might use this option if you want to move volumes in a single step, or if you do not already have a volume mirror copy.

Note: While the migration methods listed above do not disrupt access, you will need to take a brief outage to install the host drivers for your SVC. See SC26-7905 IBM System Storage SAN Volume Controller Host Attachment User's Guide for more detail. Make sure to consult the revision of the document that applies to your SVC.
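As a brief hedged sketch (the volume name volume_A is hypothetical), the write priority tunable discussed above can be switched between the two settings from the CLI:

IBM_2145:ITSO-CLS1:admin>svctask chvdisk -mirrorwritepriority latency volume_A
IBM_2145:ITSO-CLS1:admin>svctask chvdisk -mirrorwritepriority redundancy volume_A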
Once you do this, you will receive an option to specify the type of volume mirror to make, generic or thin-provisioned, and to select the storage pool to use for the copy, as shown in Figure 8-12. Make sure you select a storage pool with sufficient space and similar performance characteristics. Then select Add Copy.
Figure 8-12 Confirm Volume Mirror type and storage pool to use for the mirror
After you create your mirror, you can view the distribution of extents as shown in Figure 8-13 on page 404, or you can view the mirroring progress percentage via Running Tasks, as shown in Figure 8-14.

Note: Extent distribution for the mirror copy is automatically balanced as well as possible within the selected storage pool.
Figure 8-13 The distribution of extents for primary and mirror copy of a volume
Figure 8-14 Progress of a mirror copy creation as viewed via Running Tasks
After the copy completes, you have the option of splitting either copy of the mirror into a new stand-alone volume, as shown in Figure 8-15.
After you select Split into New Volume on either Copy0 or Copy1, you are presented with the option to specify a new volume name and confirm the split, as shown in Figure 8-16.
After providing a new volume name (optional but advised) and confirming the split, you can see the results in Figure 8-17.
Note: When you split a volume copy, the view of it will return to the pool in which it was created, not where the primary copy existed.

If you want to migrate your volumes to another storage pool in one step instead of two, you can use the Migrate to Another Pool option, as shown in Figure 8-18.
Note: You cannot migrate more than one volume at a time. For this reason, Copy Services functions are more expedient if available.
If the volume has only one copy, you are presented with a storage pool selection dialog. If it has two, you are presented with a slight variation that allows you to choose the copy to migrate, as shown in Figure 8-19.
Note that the selection you are presented with on the above dialog denotes the current pool of each volume copy, so you can better determine which storage pool to use.

Finally, we explore the image mode import and image mode export. Both of these methods allow you to leverage all Copy Services functions on storage that contains pre-existing data. To import pre-existing storage, you must select Pools → MDisks by Pool → Not in a Pool, then select the storage you wish to import and right-click. When you do, you are presented with the option shown in Figure 8-20.
When you select Import, you will receive a dialog that allows you to import as a generic volume or using thin provisioning, as well as disable the cache if you so choose. This is shown in Figure 8-21.
After clicking Next, you are presented with the option to select an existing storage pool in which to place the imported volume. If you do not make a selection, it is imported into a default pool, as shown in Figure 8-22.
To perform an export of a volume, it must be in managed mode, not image mode. Select the volume and right-click, as shown in
You can export only one volume or copy at a time, and you will need to select a storage pool for it when you export it.
When you click Finish, you have exported the volume or copy to image mode. Using this ability, you can use the SVC as a data mover to migrate data between storage systems.
Software level restrictions for Multiple Cluster Mirroring:
Partnership between a cluster running 6.1.0 and a cluster running a version earlier than 4.3.1 is not supported.
Clusters in a partnership where one cluster is running 6.1.0 and the other is running 4.3.1 cannot participate in additional partnerships with other clusters.
Clusters that are all running either 6.1.0 or 5.1.0 can participate in up to three cluster partnerships.
To use an IBM Storwize V7000 as a cluster partner, it must have 6.3.0 or newer code and be configured to operate in the replication layer. Layer settings are only available on the V7000.
Note: SVC 6.1 supports object names up to 63 characters. Previous levels only supported up to 15 characters. When SVC 6.1 clusters are partnered with 4.3.1 and 5.1.0 clusters, various object names will be truncated at 15 characters when displayed from 4.3.1 and 5.1.0 clusters.
Example: A-B, A-C, and A-D
Figure 8-27 shows four clusters in a star topology, with cluster A at the center. Cluster A can be a central DR site for the three other locations. Using a star topology, you can migrate applications by using a process like the one described in the following example:
1. Suspend the application at A.
2. Remove the A-B relationship.
3. Create the A-C relationship (or alternatively, the B-C relationship).
4. Synchronize to cluster C, and ensure that A-C is established.
Example: A-B, A-C, and B-C
Example: A-B, A-C, A-D, B-C, B-D, and C-D
Figure 8-29 is a fully connected mesh where every cluster has a partnership to each of the three other clusters. This allows volumes to be replicated between any pair of clusters.
Example: A-B, B-C, and C-D
Figure 8-30 shows a daisy-chain topology.
Note that although clusters can have up to three partnerships, volumes can only be part of one Remote Copy relationship, for example, A-B.

Cluster Partnership Intermix: All of the above topologies are valid for intermix of the IBM Storwize V7000 with the SVC, as long as the V7000 is set to the replication layer and running 6.3.0 code.
Upgrade restriction: Upgrading a cluster to 6.1.0 requires that the partner cluster be running 4.3.1 or later. If the partner cluster is running 4.3.0, it must first be upgraded to 4.3.1.
Certain uses of Metro Mirror require manipulation of more than one relationship. Metro Mirror Consistency Groups can provide the ability to group relationships so that they are manipulated in unison. Consider the following points:
Metro Mirror relationships can be part of a Consistency Group, or they can be stand-alone and therefore handled as single instances.
A Consistency Group can contain zero or more relationships. An empty Consistency Group, with zero relationships in it, has little purpose until it is assigned its first relationship, except that it has a name.
All relationships in a Consistency Group must have corresponding master and auxiliary volumes.

Although it is possible to use Consistency Groups to manipulate sets of relationships that do not need to satisfy these strict rules, this manipulation can lead to undesired side effects. The rules behind a Consistency Group mean that certain configuration commands are prohibited. These configuration commands are not prohibited if the relationship is not part of a Consistency Group.

For example, consider the case of two applications that are completely independent, yet they are placed into a single Consistency Group. If an error causes a loss of synchronization, a background copy process is required to recover synchronization. While this process is in progress, Metro Mirror rejects attempts to enable access to the auxiliary volumes of either application. If one application finishes its background copy much more quickly than the other application, Metro Mirror still refuses to grant access to its auxiliary volumes, even though it is safe in this case, because Metro Mirror policy is to refuse access to the entire Consistency Group if any part of it is inconsistent.
Stand-alone relationships and Consistency Groups share a common configuration and state model. All of the relationships in a non-empty Consistency Group have the same state as the Consistency Group.
Zoning
SVC node ports on each SVC cluster must be able to communicate with each other for the partnership creation to be performed. Switch zoning is critical to facilitating intercluster communication. See Chapter 3, Planning and configuration on page 67 for critical information regarding proper zoning for intercluster communication.
Intercluster links
All SVC nodes maintain a database of other devices that are visible on the fabric. This database is updated as devices appear and disappear. Devices that advertise themselves as SVC nodes are categorized according to the SVC cluster to which they belong. SVC nodes that belong to the same cluster establish communication channels between themselves and begin to exchange messages to implement clustering and the functional protocols of SVC. Nodes that are in separate clusters do not exchange messages after initial discovery is complete, unless they have been configured together to perform a remote copy relationship. The intercluster link carries control traffic to coordinate activity between two clusters. It is formed between one node in each cluster. The traffic between the designated nodes is distributed among logins that exist between those nodes. If the designated node fails (or all of its logins to the remote cluster fail), then a new node is chosen to carry control traffic. This node change causes the I/O to pause, but it does not put the relationships in a ConsistentStopped state.
When creating the Metro Mirror relationship, you can specify whether the auxiliary volume is already in sync with the master volume, and the background copy process is then skipped. This capability is especially useful when creating Metro Mirror relationships for volumes that have been created with the format option.

The step identifiers in Figure 8-32 are described here.

Step 1
a. The Metro Mirror relationship is created with the -sync option, and the Metro Mirror relationship enters the ConsistentStopped state.
b. The Metro Mirror relationship is created without specifying that the master and auxiliary volumes are in sync, and the Metro Mirror relationship enters the InconsistentStopped state.

Step 2
a. When starting a Metro Mirror relationship in the ConsistentStopped state, the Metro Mirror relationship enters the ConsistentSynchronized state, provided that no updates (write I/O) have been performed on the master volume while in the ConsistentStopped state. Otherwise, the -force option must be specified, and the Metro Mirror relationship then enters the InconsistentCopying state while the background copy is started.
b. When starting a Metro Mirror relationship in the InconsistentStopped state, the Metro Mirror relationship enters the InconsistentCopying state while the background copy is started.

Step 3
When the background copy completes, the Metro Mirror relationship transitions from the InconsistentCopying state to the ConsistentSynchronized state.

Step 4
a. When stopping a Metro Mirror relationship in the ConsistentSynchronized state and specifying the -access option, which enables write I/O on the auxiliary volume, the Metro Mirror relationship enters the Idling state.
b. To enable write I/O on the auxiliary volume when the Metro Mirror relationship is in the ConsistentStopped state, issue the command svctask stoprcrelationship, specifying the -access option, and the Metro Mirror relationship enters the Idling state.

Step 5
a. When starting a Metro Mirror relationship that is in the Idling state, you must specify the -primary argument to set the copy direction. Provided that no write I/O has been performed (to either the master or auxiliary volume) while in the Idling state, the Metro Mirror relationship enters the ConsistentSynchronized state.
b. If write I/O has been performed to either the master or auxiliary volume, the -force option must be specified, and the Metro Mirror relationship then enters the InconsistentCopying state while the background copy is started.

Stop or Error: When a Metro Mirror relationship is stopped (either intentionally or due to an error), a state transition is applied. For example, Metro Mirror relationships in the ConsistentSynchronized state enter the ConsistentStopped state, and Metro Mirror relationships in the InconsistentCopying state enter the InconsistentStopped state. If the connection is broken between the SVC clusters in a partnership, then all (intercluster) Metro Mirror relationships enter a Disconnected state. For further information, refer to Connected versus disconnected on page 421.

Common states: Stand-alone relationships and Consistency Groups share a common configuration and state model. All Metro Mirror relationships in a Consistency Group that is not empty have the same state as the Consistency Group.
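As a hedged sketch of this lifecycle from the CLI (the relationship name MMrel1, the remote cluster name ITSO-CLS2, and the volume names are hypothetical), the steps above might map to commands like these:

IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master MM_master -aux MM_aux -cluster ITSO-CLS2 -name MMrel1
IBM_2145:ITSO-CLS1:admin>svctask startrcrelationship MMrel1
(background copy runs; the relationship moves to ConsistentSynchronized)
IBM_2145:ITSO-CLS1:admin>svctask stoprcrelationship -access MMrel1
(the relationship enters the Idling state and the auxiliary accepts write I/O)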
State overview
In the following sections, we provide an overview of the different Metro Mirror states.
When the two clusters can communicate, the clusters and the relationships spanning them are described as connected. When they cannot communicate, the clusters and the relationships spanning them are described as disconnected. In this state, both clusters are left with fragmented relationships and will be limited regarding the configuration commands that can be performed. The disconnected relationships are portrayed as having a changed state. The new states describe what is known about the relationship and what configuration commands are permitted. When the clusters can communicate again, the relationships become connected again. Metro Mirror automatically reconciles the two state fragments, taking into account any configuration or other event that took place while the relationship was disconnected. As a result, the relationship can either return to the state that it was in when it became disconnected or enter a new state. Relationships that are configured between volumes in the same SVC cluster (intracluster) will never be described as being in a disconnected state.
The application might work without a problem. Because of the risk of data corruption, and in particular undetected data corruption, Metro Mirror strongly enforces the concept of consistency and prohibits access to inconsistent data.

Consistency as a concept can be applied to a single relationship or to a set of relationships in a Consistency Group. Write ordering is a concept that an application can maintain across a number of disks accessed through multiple systems; therefore, consistency must operate across all those disks.

When deciding how to use Consistency Groups, the administrator must consider the scope of an application's data, taking into account all of the interdependent systems that communicate and exchange information. If two programs or systems communicate and store details as a result of the information exchanged, either of the following approaches must be taken:
All of the data accessed by the group of systems must be placed into a single Consistency Group.
The systems must be recovered independently (each within its own Consistency Group). Then, each system must perform recovery with the other applications to become consistent with them.
Detailed states
The following sections detail the states that are portrayed to the user, for either Consistency Groups or relationships, and the additional information that is available in each state. The major states are designed to provide guidance about the configuration commands that are available.
InconsistentStopped
InconsistentStopped is a connected state. In this state, the master is accessible for read and write I/O, but the auxiliary is not accessible for either read or write I/O. A copy process needs to be started to make the auxiliary consistent. This state is entered when the relationship or Consistency Group was InconsistentCopying and has either suffered a persistent error or received a stop command that has caused the copy process to stop. A start command causes the relationship or Consistency Group to move to the InconsistentCopying state. A stop command is accepted, but has no effect. If the relationship or Consistency Group becomes disconnected, the auxiliary side transitions to InconsistentDisconnected. The master side transitions to IdlingDisconnected.
InconsistentCopying
InconsistentCopying is a connected state. In this state, the master is accessible for read and write I/O, but the auxiliary is not accessible for either read or write I/O. This state is entered after a start command is issued to an InconsistentStopped relationship or a Consistency Group. It is also entered when a forced start is issued to an Idling or ConsistentStopped relationship or Consistency Group. In this state, a background copy process runs that copies data from the master to the auxiliary volume. In the absence of errors, an InconsistentCopying relationship is active, and the copy progress increases until the copy process completes. In certain error situations, the copy progress might freeze or even regress. A persistent error or stop command places the relationship or Consistency Group into an InconsistentStopped state. A start command is accepted but has no effect. If the background copy process completes on a stand-alone relationship, or on all relationships for a Consistency Group, the relationship or Consistency Group transitions to the ConsistentSynchronized state. If the relationship or Consistency Group becomes disconnected, the auxiliary side transitions to InconsistentDisconnected. The master side transitions to IdlingDisconnected.
ConsistentStopped
ConsistentStopped is a connected state. In this state, the auxiliary contains a consistent image, but it might be out-of-date with respect to the master. This state can arise when a relationship was in a ConsistentSynchronized state and suffers an error that forces a Consistency Freeze. It can also arise when a relationship is created with a CreateConsistentFlag set to TRUE. Normally, following an I/O error, subsequent write activity causes updates to the master and the auxiliary is no longer synchronized (set to false). In this case, to reestablish synchronization, consistency must be given up for a period. You must use a start command with the -force option to acknowledge this condition, and the relationship or Consistency Group transitions to InconsistentCopying. Enter this command only after all outstanding events have been repaired. In the unusual case where the master and the auxiliary are still synchronized (perhaps following a user stop, and no further write I/O was received), a start command takes the
relationship to ConsistentSynchronized. No -force option is required. Also, in this case, you can enter a switch command that moves the relationship or Consistency Group to ConsistentSynchronized and reverses the roles of the master and the auxiliary. If the relationship or Consistency Group becomes disconnected, the auxiliary transitions to ConsistentDisconnected. The master transitions to IdlingDisconnected. An informational status log is generated whenever a relationship or Consistency Group enters the ConsistentStopped state with a status of Online. You can configure this event to generate an SNMP trap that can be used to trigger automation or manual intervention to issue a start command following a loss of synchronization.
ConsistentSynchronized
ConsistentSynchronized is a connected state. In this state, the master volume is accessible for read and write I/O, and the auxiliary volume is accessible for read-only I/O. Writes that are sent to the master volume are sent to both the master and auxiliary volumes. Either successful completion must be received for both writes, the write must be failed to the host, or a state must transition out of the ConsistentSynchronized state before a write is completed to the host. A stop command takes the relationship to the ConsistentStopped state. A stop command with the -access parameter takes the relationship to the Idling state. A switch command leaves the relationship in the ConsistentSynchronized state, but it reverses the master and auxiliary roles. A start command is accepted, but it has no effect. If the relationship or Consistency Group becomes disconnected, the same transitions are made as for ConsistentStopped.
Idling
Idling is a connected state. Both master and auxiliary volumes operate in the master role. Consequently, both master and auxiliary volumes are accessible for write I/O. In this state, the relationship or Consistency Group accepts a start command. Metro Mirror maintains a record of regions on each disk that received write I/O while idling. This record is used to determine what areas need to be copied following a start command. The start command must specify the new copy direction. A start command can cause a loss of consistency if either volume in any relationship has received write I/O, which is indicated by the Synchronized status. If the start command leads to loss of consistency, you must specify the -force parameter. Following a start command, the relationship or Consistency Group transitions to ConsistentSynchronized if there is no loss of consistency, or to InconsistentCopying if there is a loss of consistency. Also, while in this state, the relationship or Consistency Group accepts a -clean option on the start command. If the relationship or Consistency Group becomes disconnected, both sides change their state to IdlingDisconnected.
IdlingDisconnected
IdlingDisconnected is a disconnected state. The volume or disks in this half of the relationship or Consistency Group are all in the master role and accept read or write I/O.
The priority in this state is to recover the link to restore the relationship or consistency. No configuration activity is possible (except for deletes or stops) until the relationship becomes connected again. At that point, the relationship transitions to a connected state. The exact connected state that is entered depends on the state of the other half of the relationship or Consistency Group, which depends on these factors: The state when it became disconnected The write activity since it was disconnected The configuration activity since it was disconnected If both halves are IdlingDisconnected, the relationship becomes Idling when reconnected. While IdlingDisconnected, if a write I/O is received that causes loss of synchronization (synchronized attribute transitions from true to false) and the relationship was not already stopped (either through a user stop or a persistent error), an event is raised to notify you of the condition. This same event will also be raised when this condition occurs for the ConsistentSynchronized state.
InconsistentDisconnected
InconsistentDisconnected is a disconnected state. The volumes in this half of the relationship or Consistency Group are all in the auxiliary role and do not accept read or write I/O. No configuration activity, except for deletes, is permitted until the relationship becomes connected again. When the relationship or Consistency Group becomes connected again, the relationship becomes InconsistentCopying automatically unless either condition is true: The relationship was InconsistentStopped when it became disconnected. The user issued a stop command while disconnected. In either case, the relationship or Consistency Group becomes InconsistentStopped.
ConsistentDisconnected
ConsistentDisconnected is a disconnected state. The volumes in this half of the relationship or Consistency Group are all in the auxiliary role and accept read I/O but not write I/O. This state is entered from ConsistentSynchronized or ConsistentStopped when the auxiliary side of a relationship becomes disconnected. In this state, the relationship or Consistency Group displays an attribute of FreezeTime, which is the point in time that Consistency was frozen. When entered from ConsistentStopped, it retains the time that it had in that state. When entered from ConsistentSynchronized, the FreezeTime shows the last time at which the relationship or Consistency Group was known to be consistent. This time corresponds to the time of the last successful heartbeat to the other cluster. A stop command with the -access flag set to true transitions the relationship or Consistency Group to the IdlingDisconnected state. This state allows write I/O to be performed to the auxiliary volume and is used as part of a DR scenario. When the relationship or Consistency Group becomes connected again, the relationship or Consistency Group becomes ConsistentSynchronized only if this action does not lead to a loss of consistency. These conditions must be true: The relationship was ConsistentSynchronized when it became disconnected. No writes received successful completion at the master while disconnected.
Empty
This state only applies to Consistency Groups. It is the state of a Consistency Group that has no relationships and no other state information to show. It is entered when a Consistency Group is first created. It is exited when the first relationship is added to the Consistency Group, at which point, the state of the relationship becomes the state of the Consistency Group.
Background copy
Metro Mirror paces the rate at which background copy is performed by the appropriate relationships. Background copy takes place on relationships that are in the InconsistentCopying state with a status of Online. The quota of background copy (configured on the intercluster link) is divided evenly between all of the nodes that are performing background copy for one of the eligible relationships. This allocation is made irrespective of the number of disks for which the node is responsible. Each node in turn divides its allocation evenly between the multiple relationships performing a background copy. For intracluster relationships, each node is assigned a static quota of 25 MBps.
For example, the Metro Mirror requirement to enable the auxiliary copy for access differentiates it from third-party mirroring software on the host, which aims to emulate a single, reliable disk regardless of what system is accessing it. Metro Mirror retains the property that there are two volumes in existence, but it suppresses one volume while the copy is being maintained. Using an auxiliary copy demands a conscious policy decision by the administrator that a failover is required and that the tasks to be performed on the host involved in establishing operation on the auxiliary copy are substantial. The goal is to make this rapid (much faster when compared to recovering from a backup copy) but not seamless. The failover process can be automated through failover management software. The SVC provides Simple Network Management Protocol (SNMP) traps and programming (or scripting) for the command-line interface (CLI) to enable this automation.
There is a per I/O Group limit of 1024 TB on the quantity of master and auxiliary volume address space that can participate in Metro Mirror and Global Mirror relationships. This maximum configuration will consume all 512 MB of bitmap space for the I/O Group and allow no FlashCopy bitmap space.
The Metro Mirror commands fall into the following groups:
Commands to create, delete, and manipulate relationships and Consistency Groups
Commands to cause state changes

Where a configuration command affects more than one cluster, Metro Mirror performs the work to coordinate configuration activity between the clusters. Certain configuration commands can only be performed when the clusters are connected, and fail with no effect when they are disconnected. Other configuration commands are permitted even though the clusters are disconnected. The state is reconciled automatically by Metro Mirror when the clusters become connected again.

For any given command, with one exception, a single cluster actually receives the command from the administrator. This design is significant for defining the context for a CreateRelationship (mkrcrelationship) or CreateConsistencyGroup (mkrcconsistgrp) command, in which case, the cluster receiving the command is called the local cluster. The exception mentioned previously is the command that sets clusters into a Metro Mirror partnership: the mkpartnership command must be issued to both the local and remote clusters.

The commands here are described as an abstract command set and are implemented by either method:
A command-line interface (CLI), which can be used for scripting and automation
A graphical user interface (GUI), which can be used for one-off tasks
svcinfo lsclustercandidate
The svcinfo lsclustercandidate command is used to list the clusters that are available for setting up a two-cluster partnership. This command is a prerequisite for creating Metro Mirror relationships.
svctask mkpartnership
The svctask mkpartnership command is used to establish a one-way Metro Mirror partnership between the local cluster and a remote cluster. To establish a fully functional Metro Mirror partnership, you must issue this command to both clusters. This step is a prerequisite to creating Metro Mirror relationships between volumes on the SVC clusters. When creating the partnership, you can specify the bandwidth to be used by the background copy process between the local and the remote SVC cluster, and if it is not specified, the bandwidth defaults to 50 MBps. The bandwidth must be set to a value that is less than or equal to the bandwidth that can be sustained by the intercluster link.
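As a hedged sketch (the cluster names and the 20 MBps figure are hypothetical), the two-sided partnership creation might look like this:

On ITSO-CLS1:
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 20 ITSO-CLS2

On ITSO-CLS2:
IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 20 ITSO-CLS1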
svctask chpartnership
If you need to change the bandwidth that is available for background copy in an SVC cluster partnership, you can use the svctask chpartnership command to specify the new bandwidth.
svctask mkrcconsistgrp
The svctask mkrcconsistgrp command is used to create a new, empty Metro Mirror Consistency Group. The Metro Mirror Consistency Group name must be unique across all of the Consistency Groups that are known to the clusters owning this Consistency Group. If the Consistency Group involves two clusters, the clusters must be in communication throughout the creation process. The new Consistency Group does not contain any relationships and will be in the Empty state. Metro Mirror relationships can be added to the group either upon creation or afterward by using the svctask chrcrelationship command.
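As a hedged sketch (the group name MMcg1 and the remote cluster name are hypothetical), a Consistency Group spanning two clusters might be created like this:

IBM_2145:ITSO-CLS1:admin>svctask mkrcconsistgrp -cluster ITSO-CLS2 -name MMcg1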
svctask mkrcrelationship
The svctask mkrcrelationship command is used to create a new Metro Mirror relationship. This relationship persists until it is deleted. The auxiliary volume must be equal in size to the master volume or the command will fail, and if both volumes are in the same cluster, they must both be in the same I/O Group. The master and auxiliary volume cannot be in an existing relationship and cannot be the target of a FlashCopy mapping. This command returns the new relationship (relationship_id) when successful. When creating the Metro Mirror relationship, it can be added to an already existing Consistency Group, or it can be a stand-alone Metro Mirror relationship if no Consistency Group is specified. To check whether the master or auxiliary volumes comply with the prerequisites to participate in a Metro Mirror relationship, use the svcinfo lsrcrelationshipcandidate command.
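Continuing the hedged sketch (the volume names are hypothetical), candidates can be listed and a relationship created directly into the group from the previous example:

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationshipcandidate
IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master MM_master -aux MM_aux -cluster ITSO-CLS2 -consistgrp MMcg1 -name MMrel2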
svcinfo lsrcrelationshipcandidate
The svcinfo lsrcrelationshipcandidate command is used to list available volumes that are eligible for a Metro Mirror relationship. When issuing the command, you can specify the source volume name and secondary cluster to list candidates that comply with prerequisites to create a Metro Mirror relationship. If the command is issued with no flags, all volumes that are not disallowed by another configuration state, such as being a FlashCopy target, are listed.
svctask chrcrelationship
The svctask chrcrelationship command is used to modify the following properties of a Metro Mirror relationship:
Change the name of a Metro Mirror relationship.
Add a relationship to a group.
Remove a relationship from a group, using the -force flag.

Adding a Metro Mirror relationship: When adding a Metro Mirror relationship to a Consistency Group that is not empty, the relationship must have the same state and copy direction as the group in order to be added to it.
svctask chrcconsistgrp
The svctask chrcconsistgrp command is used to change the name of a Metro Mirror Consistency Group.
svctask startrcrelationship
The svctask startrcrelationship command is used to start the copy process of a Metro Mirror relationship. When issuing the command, the copy direction can be set if it is undefined, and the auxiliary volume of the relationship can optionally be marked as clean. The command fails if it is used to attempt to start a relationship that is part of a Consistency Group. This command can only be issued to a relationship that is connected. For a relationship that is idling, this command assigns a copy direction (master and auxiliary roles) and begins the copy process. Otherwise, this command restarts a previous copy process that was stopped either by a stop command or by an I/O error. If the resumption of the copy process leads to a period when the relationship is inconsistent, you must specify the -force flag when restarting the relationship. This situation can arise if, for example, the relationship was stopped and then further writes were performed on the original master of the relationship. The use of the -force flag here is a reminder that the data on the auxiliary will become inconsistent while resynchronization (background copying) occurs, and therefore, the data is not usable for DR purposes before the background copy has completed. In the Idling state, you must specify the master volume to indicate the copy direction. In other connected states, you can provide the -primary argument, but it must match the existing setting.
svctask stoprcrelationship
The svctask stoprcrelationship command is used to stop the copy process for a relationship. It can also be used to enable write access to a consistent auxiliary volume by specifying the -access flag. This command applies to a stand-alone relationship. It is rejected if it is addressed to a relationship that is part of a Consistency Group. You can issue this command to stop a relationship that is copying from master to auxiliary. If the relationship is in an inconsistent state, any copy operation stops and does not resume until you issue an svctask startrcrelationship command. Write activity is no longer copied from the master to the auxiliary volume. For a relationship in the ConsistentSynchronized state, this command causes a consistency freeze. When a relationship is in a consistent state (that is, in the ConsistentStopped, ConsistentSynchronized, or ConsistentDisconnected state), you can use the -access parameter with the stoprcrelationship command to enable write access to the auxiliary volume.
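As a hedged one-line sketch (the relationship name MMrel1 is hypothetical), enabling write access to a consistent auxiliary volume looks like this:

IBM_2145:ITSO-CLS1:admin>svctask stoprcrelationship -access MMrel1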
svctask stoprcconsistgrp
The svctask stoprcconsistgrp command is used to stop the copy process for a Metro Mirror Consistency Group. It can also be used to enable write access to the auxiliary volumes in the group if the group is in a consistent state. If the Consistency Group is in an inconsistent state, any copy operation stops and does not resume until you issue the svctask startrcconsistgrp command. Write activity is no longer copied from the master to the auxiliary volumes belonging to the relationships in the group. For a Consistency Group in the ConsistentSynchronized state, this command causes a consistency freeze. When a Consistency Group is in a consistent state (for example, in the ConsistentStopped, ConsistentSynchronized, or ConsistentDisconnected state), the -access argument can be used with the svctask stoprcconsistgrp command to enable write access to the auxiliary volumes within that group.
svctask rmrcrelationship
The svctask rmrcrelationship command is used to delete the relationship that is specified. Deleting a relationship only deletes the logical relationship between the two volumes. It does not affect the volumes themselves. If the relationship is disconnected at the time that the command is issued, the relationship is only deleted on the cluster on which the command is being run. When the clusters reconnect, then the relationship is automatically deleted on the other cluster. Alternatively, if the clusters are disconnected, and you still want to remove the relationship on both clusters, you can issue the rmrcrelationship command independently on both of the clusters. If you delete an inconsistent relationship, the auxiliary volume becomes accessible even though it is still inconsistent. This situation is the one case in which Metro Mirror does not inhibit access to inconsistent data.
svctask rmrcconsistgrp
The svctask rmrcconsistgrp command is used to delete a Metro Mirror Consistency Group. This command deletes the specified Consistency Group. You can issue this command for any existing Consistency Group. If the Consistency Group is disconnected at the time that the command is issued, the Consistency Group is only deleted on the cluster on which the command is being run. When the clusters reconnect, the Consistency Group is automatically deleted on the other cluster. Alternatively, if the clusters are disconnected, and you still want to remove the Consistency Group on both clusters, you can issue the svctask rmrcconsistgrp command separately on both of the clusters. If the Consistency Group is not empty, the relationships within it are removed from the Consistency Group before the group is deleted. These relationships then become stand-alone relationships. The state of these relationships is not changed by the action of removing them from the Consistency Group.
svctask switchrcrelationship
The svctask switchrcrelationship command is used to reverse the roles of the master and auxiliary volumes when a stand-alone relationship is in a consistent state. When issuing the command, the desired master is specified.
svctask switchrcconsistgrp
The svctask switchrcconsistgrp command is used to reverse the roles of the master and auxiliary volumes when a Consistency Group is in a consistent state. This change is applied to all of the relationships in the Consistency Group, and when issuing the command, the desired master is specified.
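As a hedged sketch (the group name MMcg1 is hypothetical), reversing the replication direction of a consistent group so that the auxiliary copies take the master role might look like this:

IBM_2145:ITSO-CLS1:admin>svctask switchrcconsistgrp -primary aux MMcg1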
Note: The SVC partnership bandwidth limit is specified in megabytes per second and only applies during initial copy or resynchronization. This number is independent of whatever transport method you are using to get data between locations.
Figure 8-33 shows that a write operation to the master volume is acknowledged back to the host issuing the write before the write operation is mirrored to the cache for the auxiliary volume.
The Global Mirror algorithms maintain a consistent image on the auxiliary at all times. They achieve this consistent image by identifying sets of I/Os that are active concurrently at the master, assigning an order to those sets, and applying those sets of I/Os in the assigned order at the secondary. As a result, Global Mirror maintains the features of Write Ordering and Read Stability that are described in this chapter. The multiple I/Os within a single set are applied concurrently. The process that marshals the sequential sets of I/Os operates at the secondary cluster, and is therefore not subject to the latency of the long distance link. These two elements of the protocol ensure that the throughput of the total cluster can be grown by increasing cluster size, while maintaining consistency across a growing data set. In a failover scenario, where the secondary site needs to become the master source of data, certain updates might be missing at the secondary site. Therefore, any applications that will use this data must have an external mechanism for recovering the missing updates and reapplying them, for example, a transaction log replay.
SVC supports intercluster Global Mirror, where each volume belongs to a separate SVC cluster. A given SVC cluster can be configured for partnership with between one and three other clusters. Intercluster and intracluster Global Mirror can be used concurrently within a cluster for separate relationships. SVC does not require a control network or fabric to be installed to manage Global Mirror. For intercluster Global Mirror, the SVC maintains a control link between the two clusters. This control link is used to control the state and to coordinate the updates at either end. The control link is implemented on top of the same FC fabric connection that the SVC uses for Global Mirror I/O. SVC implements a configuration model that maintains the Global Mirror configuration and state through major events, such as failover, recovery, and resynchronization, to minimize user configuration action through these events. SVC implements flexible resynchronization support, enabling it to resynchronize volume pairs that have experienced write I/Os to both disks and to resynchronize only those regions that are known to have changed. An optional feature for Global Mirror permits a delay simulation to be applied on writes that are sent to auxiliary volumes. As of 6.3.0, Global Mirror source and target volumes may be associated with Change Volumes.
Colliding writes
Prior to V4.3.1, the Global Mirror algorithm required that only a single write is active on any given 512 byte LBA of a volume. If a further write is received from a host while the auxiliary write is still active, even though the master write might have completed, the new host write will be delayed until the auxiliary write is complete. This restriction is needed in case a series of writes to the auxiliary have to be retried (called reconstruction). Conceptually, the data for reconstruction comes from the master volume. If multiple writes are allowed to be applied to the master for a given sector, only the most recent write will get the correct data during reconstruction, and if reconstruction is interrupted for any reason, the intermediate state of the auxiliary is inconsistent. Applications that deliver such write activity will not achieve the performance that Global Mirror is intended to support. A volume statistic is maintained about the frequency of these collisions. From V4.3.1 onward, an attempt is made to allow multiple writes to a single location to be outstanding in the Global Mirror algorithm. There is still a need for master writes to be serialized, and the intermediate states of the master data must be kept in a non-volatile journal while the writes are outstanding to maintain the correct write ordering during reconstruction. Reconstruction must never overwrite data on the auxiliary with an earlier version. The volume statistic monitoring colliding writes is now limited to those writes that are not affected by this change. Figure 8-34 on page 438 shows a colliding write sequence example.
These numbers correspond to the numbers in Figure 8-34:
(1) The first write is performed from the host to LBA X.
(2) The host is provided acknowledgment that the write is complete, even though the mirrored write to the auxiliary volume has not yet completed. (1) and (2) occur asynchronously with the first write.
(3) The second write is performed from the host, also to LBA X. If this write occurs prior to (2), the write is written to the journal file.
(4) The host is provided acknowledgment that the second write is complete.
Delay simulation
An optional feature for Global Mirror permits a delay simulation to be applied on writes that are sent to auxiliary volumes. This feature allows testing to be performed that detects colliding writes, and therefore, it can be used to test an application before the full deployment of Global Mirror. The feature can be enabled separately for intracluster and intercluster Global Mirror. You specify the delay setting by using the chcluster command and view it by using the lscluster command. The gm_intra_delay_simulation field expresses the amount of time that intracluster auxiliary I/Os are delayed. The gm_inter_delay_simulation field expresses the amount of time that intercluster auxiliary I/Os are delayed. A value of zero (0) disables the feature. Tip: If you are experiencing repeated problems with the delay on your link, make sure that the delay simulator was properly disabled.
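For example, to simulate a 20 ms delay on intercluster auxiliary writes and then display the resulting settings (the 20 ms value is purely illustrative):
svctask chcluster -gminterdelaysimulation 20
svcinfo lscluster <clustername>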
In the most common applications of Global Mirror, the master volume contains the production copy of the data and is used by the host application. The auxiliary volume contains the mirrored copy of the data and is used for failover in DR scenarios. Due to the nature of consistency requirements and SCSI protocol standards, the auxiliary or target volume cannot be actively in use while the Global Mirror relationship is actively copying data. Notes: A volume can only be part of one Global Mirror relationship at a time. As of SVC 6.2.0.0, a volume that is a FlashCopy target can be part of a Global Mirror relationship.
With Change Volumes, a FlashCopy mapping exists between the primary volume and the primary Change Volume. The mapping is updated on the cycling period (60 seconds to one day). The primary Change Volume is then replicated to the secondary Global Mirror volume at the target site, which is then captured in another Change Volume on the target site. This provides an always consistent image at the target site and protects your data from being inconsistent during resynchronization. Let's take a closer look at how Change Volumes might save you replication traffic.
In Figure 8-37 you can see a number of I/Os on the source and the same number on the target, in the same order. Assuming that this is the same set of data being updated over and over, this is wasted network traffic, and the I/O can be completed much more efficiently, as shown in Figure 8-38.
In Figure 8-38 the same data is being updated repeatedly, so Change Volumes demonstrate significant I/O transmission savings by needing to send only I/O number 16, which was the last I/O before the cycling period. The cycling period can be adjusted with the chrcrelationship -cycleperiodseconds <60-86400> command from the CLI. If a copy does not complete in the cycle period, the next cycle does not start until the prior one has completed. For this reason, using Change Volumes gives you two possibilities for RPO:
1. If your replication completes within the cycling period, your RPO is twice the cycling period.
2. If your replication does not complete within the cycling period, your RPO is twice the completion time. The next cycling period starts immediately after the prior one finishes.
Carefully weigh your business requirements against the performance of Global Mirror with Change Volumes. Global Mirror with Change Volumes increases the intercluster traffic for more frequent cycling periods, so going as short as possible isn't always the answer. In most cases, the default should meet requirements and perform reasonably well.
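For example, to set a five-minute cycling period on an existing relationship (the relationship name GMREL1 is a hypothetical placeholder, not one defined in this chapter):
svctask chrcrelationship -cycleperiodseconds 300 GMREL1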
Note: When making your Global Mirror volumes with Change Volumes, make sure that you remember to select the Change Volume on the auxiliary (target) site. Failure to do so will leave you exposed during a resynchronization operation.
Important: The GUI for 6.3.0 will automatically create Change Volumes for you. However, it is a limitation of this initial release that they are fully provisioned volumes. To save space, create thin-provisioned volumes beforehand and use the existing volume option for selecting your Change Volumes.
Figure 8-39 on page 443 illustrates the concept of Global Mirror Consistency Groups. Because GM_Relationship 1 and GM_Relationship 2 are part of the Consistency Group, they can be handled as one entity. The stand-alone GM_Relationship 3 is handled separately.
Certain uses of Global Mirror require the manipulation of more than one relationship. Global Mirror Consistency Groups provide the ability to group relationships so that they are manipulated in unison. Global Mirror relationships can be part of a Consistency Group, or they can be stand-alone and therefore handled as single instances. A Consistency Group can contain zero (0) or more relationships. An empty Consistency Group, with zero relationships in it, has little purpose until it is assigned its first relationship, except that it has a name.
All of the relationships in a Consistency Group must have matching master and auxiliary clusters. Although it is possible to use Consistency Groups to manipulate sets of relationships that do not need to satisfy these strict rules, such manipulation can lead to undesired side effects. The rules behind a Consistency Group mean that certain configuration commands are prohibited. These specific configuration commands are not prohibited if the relationship is not part of a Consistency Group.
For example, consider the case of two applications that are completely independent, yet they are placed into a single Consistency Group. If a loss of synchronization occurs, and a background copy process is required to recover synchronization, then while this process is in progress, Global Mirror rejects attempts to enable access to the auxiliary volumes of either application.
If one application finishes its background copy before the other, Global Mirror still refuses to grant access to its auxiliary volume. Even though it is safe in this case, Global Mirror policy refuses access to the entire Consistency Group if any part of it is inconsistent. Stand-alone relationships and Consistency Groups share a common configuration and state model. All of the relationships in a Consistency Group that is not empty have the same state as the Consistency Group.
3. To manage multiple Global Mirror relationships as one entity, the relationships can be made part of a Global Mirror Consistency Group to ensure data consistency across multiple Global Mirror relationships, or simply for ease of management.
4. The Global Mirror relationship is started, and when the background copy has completed, the relationship is consistent and synchronized.
5. When synchronized, the auxiliary volume holds a copy of the production data at the master that can be used for DR.
6. To access the auxiliary volume, the Global Mirror relationship must be stopped with the access option enabled, before write I/O is submitted to the auxiliary.
7. The remote host server is mapped to the auxiliary volume, and the disk is available for I/O.
When creating the Global Mirror relationship, you can specify whether the auxiliary volume is already in sync with the master volume, and the background copy process is then skipped. This capability is especially useful when creating Global Mirror relationships for volumes that have been created with the format option.
The following steps explain the Global Mirror state diagram (these numbers correspond to the numbers in Figure 8-40 on page 446):
Step 1
a. The Global Mirror relationship is created with the -sync option, and the Global Mirror relationship enters the ConsistentStopped state.
b. The Global Mirror relationship is created without specifying that the master and auxiliary volumes are in sync, and the Global Mirror relationship enters the InconsistentStopped state.
Step 2
a. When starting a Global Mirror relationship in the ConsistentStopped state, it enters the ConsistentSynchronized state. This state implies that no updates (write I/O) have been performed on the master volume while in the ConsistentStopped state. Otherwise, you must specify the -force option, and the Global Mirror relationship then enters the InconsistentCopying state while the background copy is started.
b. When starting a Global Mirror relationship in the InconsistentStopped state, it enters the InconsistentCopying state while the background copy is started.
Step 3
a. When the background copy completes, the Global Mirror relationship transitions from the InconsistentCopying state to the ConsistentSynchronized state.
Step 4
a. When stopping a Global Mirror relationship in the ConsistentSynchronized state, where specifying the -access option enables write I/O on the auxiliary volume, the Global Mirror relationship enters the Idling state.
b. To enable write I/O on the auxiliary volume when the Global Mirror relationship is in the ConsistentStopped state, issue the command svctask stoprcrelationship, specifying the -access option, and the Global Mirror relationship enters the Idling state.
Step 5
a. When starting a Global Mirror relationship that is in the Idling state, you must specify the -primary argument to set the copy direction. Because no write I/O has been performed (to either the master or auxiliary volume) while in the Idling state, the Global Mirror relationship enters the ConsistentSynchronized state.
b. If write I/O has been performed to either the master or the auxiliary volume, you must specify the -force option. The Global Mirror relationship then enters the InconsistentCopying state while the background copy is started.
If the Global Mirror relationship is intentionally stopped or experiences an error, a state transition is applied. For example, Global Mirror relationships in the ConsistentSynchronized state enter the ConsistentStopped state, and Global Mirror relationships in the InconsistentCopying state enter the InconsistentStopped state. In a case where the connection is broken between the SVC clusters in a partnership, all of the (intercluster) Global Mirror relationships enter a Disconnected state. For further information, refer to Connected versus disconnected on page 447.
Common configuration and state model: Stand-alone relationships and Consistency Groups share a common configuration and state model. All of the Global Mirror relationships in a Consistency Group that is not empty have the same state as the Consistency Group.
State overview
The SVC-defined concepts of state are key to understanding the configuration model. We explain them in more detail here.
In this scenario, each cluster is left with half of the relationship, and each cluster has only a portion of the information that was available to it before. Only a subset of the normal configuration activity is available. The disconnected relationships are portrayed as having a changed state. The new states describe what is known about the relationship and which configuration commands are permitted. When the clusters can communicate again, the relationships become connected again. Global Mirror automatically reconciles the two state fragments, taking into account any configuration activity or other event that took place while the relationship was disconnected. As a result, the relationship can either return to the state that it was in when it became disconnected or it can enter another connected state. Relationships that are configured between volumes in the same SVC cluster (intracluster) will never be described as being in a disconnected state.
Because of the risk of data corruption, and, in particular, undetected data corruption, Global Mirror strongly enforces the concept of consistency and prohibits access to inconsistent data. You can apply consistency as a concept to a single relationship or to a set of relationships in a Consistency Group. Write ordering is a concept that an application can maintain across a number of disks that are accessed through multiple systems, and therefore, consistency must operate across all of those disks.
When deciding how to use Consistency Groups, the administrator must consider the scope of an application's data, taking into account all of the interdependent systems that communicate and exchange information. If two programs or systems communicate and store details as a result of the information exchanged, either of the following approaches might be required:
- All of the data that is accessed by the group of systems must be placed into a single Consistency Group.
- The systems must be recovered independently (each within its own Consistency Group). Then, each system must perform recovery with the other applications to become consistent with them.
Detailed states
The following sections detail the states that are portrayed to the user, for either Consistency Groups or relationships. They also detail the extra information that is available in each state. We describe the major states to provide guidance regarding the available configuration commands.
InconsistentStopped
InconsistentStopped is a connected state. In this state, the master is accessible for read and write I/O, but the auxiliary is inaccessible for either read or write I/O. A copy process needs to be started to make the auxiliary consistent.
This state is entered when the relationship or Consistency Group was InconsistentCopying and has either suffered a persistent error or received a stop command that has caused the copy process to stop. A start command causes the relationship or Consistency Group to move to the InconsistentCopying state. A stop command is accepted, but has no effect. If the relationship or Consistency Group becomes disconnected, the auxiliary side transitions to InconsistentDisconnected. The master side transitions to IdlingDisconnected.
InconsistentCopying
InconsistentCopying is a connected state. In this state, the master is accessible for read and write I/O, but the auxiliary is inaccessible for either read or write I/O. This state is entered after a start command is issued to an InconsistentStopped relationship or Consistency Group. It is also entered when a forced start is issued to an Idling or ConsistentStopped relationship or Consistency Group. In this state, a background copy process runs, which copies data from the master to the auxiliary volume. In the absence of errors, an InconsistentCopying relationship is active, and the copy progress increases until the copy process completes. In certain error situations, the copy progress might freeze or even regress. A persistent error or stop command places the relationship or Consistency Group into the InconsistentStopped state. A start command is accepted, but has no effect. If the background copy process completes on a stand-alone relationship, or on all relationships for a Consistency Group, the relationship or Consistency Group transitions to the ConsistentSynchronized state. If the relationship or Consistency Group becomes disconnected, the auxiliary side transitions to InconsistentDisconnected. The master side transitions to IdlingDisconnected.
ConsistentStopped
ConsistentStopped is a connected state. In this state, the auxiliary contains a consistent image, but it might be out-of-date with respect to the master. This state can arise when a relationship is in the ConsistentSynchronized state and experiences an error that forces a Consistency Freeze. It can also arise when a relationship is created with a CreateConsistentFlag set to true. Normally, following an I/O error, subsequent write activity causes updates to the master, and the auxiliary is no longer synchronized (set to false). In this case, to reestablish synchronization, consistency must be given up for a period. A start command with the -force option must be used to acknowledge this situation, and the relationship or Consistency Group transitions to InconsistentCopying. Issue this command only after all of the outstanding events are repaired. In the unusual case where the master and auxiliary are still synchronized (perhaps following a user stop, and no further write I/O was received), a start command takes the relationship to ConsistentSynchronized. No -force option is required. Also, in this unusual case, a switch command is permitted that moves the relationship or Consistency Group to ConsistentSynchronized and reverses the roles of the master and the auxiliary.
If the relationship or Consistency Group becomes disconnected, then the auxiliary side transitions to ConsistentDisconnected. The master side transitions to IdlingDisconnected. An informational status log is generated every time a relationship or Consistency Group enters the ConsistentStopped state with a status of Online. This log can be configured to enable an SNMP trap and provide a trigger to automation software to consider issuing a start command following a loss of synchronization.
ConsistentSynchronized
ConsistentSynchronized is a connected state. In this state, the master volume is accessible for read and write I/O. The auxiliary volume is accessible for read-only I/O. Writes that are sent to the master volume are sent to both the master and auxiliary volumes. Either successful completion must be received for both writes, the write must be failed to the host, or a state transition out of the ConsistentSynchronized state must occur before a write is completed to the host. A stop command takes the relationship to the ConsistentStopped state. A stop command with the -access parameter takes the relationship to the Idling state. A switch command leaves the relationship in the ConsistentSynchronized state, but reverses the master and auxiliary roles. A start command is accepted, but has no effect. If the relationship or Consistency Group becomes disconnected, the same transitions are made as for ConsistentStopped.
Idling
Idling is a connected state. Both master and auxiliary disks are operating in the master role. Consequently, both master and auxiliary disks are accessible for write I/O. In this state, the relationship or Consistency Group accepts a start command. Global Mirror maintains a record of regions on each disk that received write I/O while Idling. This record is used to determine what areas need to be copied following a start command. The start command must specify the new copy direction. A start command can cause a loss of consistency if either volume in any relationship has received write I/O, which is indicated by the synchronized status. If the start command leads to loss of consistency, you must specify a -force parameter. Following a start command, the relationship or Consistency Group transitions to ConsistentSynchronized if there is no loss of consistency, or to InconsistentCopying if there is a loss of consistency. Also, while in this state, the relationship or Consistency Group accepts a -clean option on the start command. If the relationship or Consistency Group becomes disconnected, both sides change their state to IdlingDisconnected.
IdlingDisconnected
IdlingDisconnected is a disconnected state. The volumes in this half of the relationship or Consistency Group are all in the master role and accept read or write I/O. The major priority in this state is to recover the link and reconnect the relationship or Consistency Group.
No configuration activity is possible (except for deletes or stops) until the relationship is reconnected. At that point, the relationship transitions to a connected state. The exact connected state that is entered depends on the state of the other half of the relationship or Consistency Group, which depends on these factors:
- The state when it became disconnected
- The write activity since it was disconnected
- The configuration activity since it was disconnected
If both halves are IdlingDisconnected, the relationship becomes Idling when reconnected. While IdlingDisconnected, if a write I/O is received that causes loss of synchronization (the synchronized attribute transitions from true to false) and the relationship was not already stopped (either through a user stop or a persistent error), an event is raised. This same event is also raised when this condition occurs for the ConsistentSynchronized state.
InconsistentDisconnected
InconsistentDisconnected is a disconnected state. The volumes in this half of the relationship or Consistency Group are all in the auxiliary role and do not accept read or write I/O. No configuration activity, except for deletes, is permitted until the relationship reconnects. When the relationship or Consistency Group reconnects, the relationship becomes InconsistentCopying automatically unless either of these conditions exists:
- The relationship was InconsistentStopped when it became disconnected.
- The user issued a stop while disconnected.
In either case, the relationship or Consistency Group becomes InconsistentStopped.
ConsistentDisconnected
ConsistentDisconnected is a disconnected state. The volumes in this half of the relationship or Consistency Group are all in the auxiliary role and accept read I/O but not write I/O. This state is entered from ConsistentSynchronized or ConsistentStopped when the auxiliary side of a relationship becomes disconnected.
In this state, the relationship or Consistency Group displays an attribute of FreezeTime, which is the point in time that consistency was frozen. When entered from ConsistentStopped, it retains the time that it had in that state. When entered from ConsistentSynchronized, the FreezeTime shows the last time at which the relationship or Consistency Group was known to be consistent. This time corresponds to the time of the last successful heartbeat to the other cluster.
A stop command with the -access flag set to true transitions the relationship or Consistency Group to the IdlingDisconnected state. This state allows write I/O to be performed to the auxiliary volume and is used as part of a DR scenario.
When the relationship or Consistency Group reconnects, it becomes ConsistentSynchronized only if this does not lead to a loss of consistency. This is the case provided that these conditions are true:
- The relationship was ConsistentSynchronized when it became disconnected.
- No writes received successful completion at the master while disconnected.
Otherwise, the relationship becomes ConsistentStopped. The FreezeTime setting is retained.
Empty
This state only applies to Consistency Groups. It is the state of a Consistency Group that has no relationships and no other state information to show. It is entered when a Consistency Group is first created. It is exited when the first relationship is added to the Consistency Group, at which point, the state of the relationship becomes the state of the Consistency Group.
A per I/O Group limit of 1024 TB exists on the quantity of master and auxiliary volume address spaces that can participate in Metro Mirror and Global Mirror relationships. This maximum configuration will consume 512 MB of bitmap space for the I/O Group and allow 10 MB of space for all remaining copy services features.
svcinfo lsclustercandidate
Use the svcinfo lsclustercandidate command to list the clusters that are available for setting up a two-cluster partnership. This command is a prerequisite for creating Global Mirror relationships. To display the characteristics of the cluster, use the svcinfo lscluster command, specifying the name of the cluster.
svctask chcluster
The following chcluster parameters are relevant for Global Mirror:
-gmlinktolerance link_tolerance
This parameter specifies the maximum period of time that the system will tolerate delay before stopping Global Mirror relationships. Specify values between 60 and 86400 seconds in increments of 10 seconds. The default value is 300. Do not change this value except under the direction of IBM Support.
-relationshipbandwidthlimit cluster_relationship_bandwidth_limit
This parameter controls the maximum rate at which any one remote copy relationship can synchronize. The default value for the relationship bandwidth limit is 25 MBps, but this value can now be specified between 1 MBps and 1000 MBps. Note that the partnership overall limit is controlled by the chpartnership -bandwidth command and must be set on each involved cluster accordingly.
Attention: Do not set this value higher than the default without first establishing that the higher bandwidth can be sustained without impacting host performance. The limit should never be above the maximum supported by the infrastructure connecting the remote sites, regardless of the compression rates that you might achieve.
-gminterdelaysimulation link_tolerance
This parameter specifies the number of milliseconds that I/O activity (intercluster copying to an auxiliary volume) is delayed. This parameter permits you to test performance implications before deploying Global Mirror and obtaining a long-distance link. Specify a value from 0 to 100 milliseconds in 1 millisecond increments. The default value is 0. Use this argument to test each intercluster Global Mirror relationship separately.
-gmintradelaysimulation link_tolerance
This parameter specifies the number of milliseconds that I/O activity (intracluster copying to an auxiliary volume) is delayed. This parameter permits you to test performance implications before deploying Global Mirror and obtaining a long-distance link. Specify a value from 0 to 100 milliseconds in 1 millisecond increments. The default value is 0. Use this argument to test each intracluster Global Mirror relationship separately.
Use the svctask chcluster command to adjust these values; see the following example:
svctask chcluster -gmlinktolerance 300
You can view all of these parameter values with the svcinfo lscluster <clustername> command.
gmlinktolerance
The gmlinktolerance parameter deserves a particular and detailed note. If poor response extends past the specified tolerance, a 1920 event is logged and one or more Global Mirror relationships are automatically stopped, which protects the application hosts at the primary site. During normal operation, application hosts experience a minimal effect from the response times, because the Global Mirror feature uses asynchronous replication. However, if Global Mirror operations experience degraded response times from the secondary cluster for an extended period of time, I/O operations begin to queue at the primary cluster. This queue results in an extended response time to application hosts. In this situation, the gmlinktolerance feature stops Global Mirror relationships, and the application hosts' response time returns to normal.
After a 1920 event has occurred, the Global Mirror auxiliary volumes are no longer in the consistent_synchronized state until you fix the cause of the event and restart your Global Mirror relationships. For this reason, ensure that you monitor the cluster to track when these 1920 events occur.
You can disable the gmlinktolerance feature by setting the gmlinktolerance value to 0 (zero). However, the gmlinktolerance feature cannot protect applications from extended response times if it is disabled. It might be appropriate to disable the gmlinktolerance feature under the following circumstances:
- During SAN maintenance windows, where degraded performance is expected from SAN components and application hosts can withstand extended response times from Global Mirror volumes.
- During periods when application hosts can tolerate extended response times and it is expected that the gmlinktolerance feature might stop the Global Mirror relationships. For example, if you test using an I/O generator that is configured to stress the back-end storage, the gmlinktolerance feature might detect the high latency and stop the Global Mirror relationships. Disabling the gmlinktolerance feature prevents this result at the risk of exposing the test host to extended response times.
We suggest using a script to periodically monitor the Global Mirror status. Example 8-2 shows an example of a script in ksh to check the Global Mirror status.
Example 8-2 Script example
[AIX1@root] /usr/GMC > cat checkSVCgm
#!/bin/sh
#
# Description
#
# GM_STATUS       GM status variable
# HOSTsvcNAME     SVC cluster ipaddress
# PARA_TEST       Consistent synchronized variable
# PARA_TESTSTOPIN Stop inconsistent variable
# PARA_TESTSTOP   Consistent stopped variable
# IDCONS          Consistency Group ID variable
# variable definition
HOSTsvcNAME="128.153.3.237"
IDCONS=255
PARA_TEST="consistent_synchronized"
PARA_TESTSTOP="consistent_stopped"
PARA_TESTSTOPIN="inconsistent_stopped"
FLOG="/usr/GMC/log/gmtest.log"
VAR=0
# Start program
if [[ $1 == "" ]]
then
    CICLI="true"
fi
while $CICLI
do
    # read the consistency group state (column 8 of the lsrcconsistgrp output)
    GM_STATUS=`ssh -l admin $HOSTsvcNAME svcinfo lsrcconsistgrp -delim : | awk -F: 'NR==2 {print $8 }'`
    echo "`date` Global Mirror STATUS <$GM_STATUS> " >> $FLOG
    if [[ $GM_STATUS = $PARA_TEST ]]
    then
        sleep 600
    else
        sleep 600
        GM_STATUS=`ssh -l admin $HOSTsvcNAME svcinfo lsrcconsistgrp -delim : | awk -F: 'NR==2 {print $8 }'`
        if [[ $GM_STATUS = $PARA_TESTSTOP || $GM_STATUS = $PARA_TESTSTOPIN ]]
        then
            ssh -l admin $HOSTsvcNAME svctask startrcconsistgrp -force $IDCONS
            TESTEX=`echo $?`
            echo "`date` Global Mirror RESTARTED.......... with RC=$TESTEX " >> $FLOG
        fi
        # report an error if the group is still stopped after the restart attempt
        GM_STATUS=`ssh -l admin $HOSTsvcNAME svcinfo lsrcconsistgrp -delim : | awk -F: 'NR==2 {print $8 }'`
        if [[ $GM_STATUS = $PARA_TESTSTOP || $GM_STATUS = $PARA_TESTSTOPIN ]]
        then
            echo "`date` ERROR Global Mirror failed <$GM_STATUS>"
        else
            echo "`date` Global Mirror restarted <$GM_STATUS>"
        fi
        sleep 600
    fi
    ((VAR+=1))
done
The script in Example 8-2 on page 456 performs these functions:
- Check the Global Mirror status every 600 seconds.
- If the status is ConsistentSynchronized, wait another 600 seconds and test again.
- If the status is ConsistentStopped or InconsistentStopped, wait another 600 seconds and then try to restart Global Mirror.
If the status remains ConsistentStopped or InconsistentStopped, it is likely that an associated 1920 event exists, which means that we might have a performance problem. Waiting 600 seconds before restarting Global Mirror can give the SVC enough time to deliver the high workload that is requested by the server. Because Global Mirror has been stopped for 10 minutes (600 seconds), the auxiliary copy is now out of date by this amount of time and must be resynchronized. Sample script: The script described in Example 8-2 on page 456 is supplied as is.
A 1920 event indicates that one or more of the SAN components are unable to provide the performance that is required by the application hosts. This situation can be temporary (for example, a result of a maintenance activity) or permanent (for example, a result of a hardware failure or an unexpected host I/O workload). If 1920 events are occurring, it can be necessary to use a performance monitoring and analysis tool, such as the IBM Tivoli Storage Productivity Center, to assist in identifying and resolving the problem.
svctask mkpartnership
Use the svctask mkpartnership command to establish a one-way Global Mirror partnership between the local cluster and a remote cluster. To establish a fully functional Global Mirror partnership, you must issue this command on both clusters. This step is a prerequisite for creating Global Mirror relationships between volumes on the SVC clusters. When creating the partnership, you can specify the bandwidth to be used by the background copy process between the local and the remote SVC cluster, and if it is not specified, the bandwidth defaults to 50 MBps. The bandwidth must be set to a value that is less than or equal to the bandwidth that can be sustained by the intercluster link.
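For example, a minimal sketch that creates a 50 MBps partnership from each side (the cluster name ITSO_SVC2 is a hypothetical placeholder; the second command is run on the remote cluster):
svctask mkpartnership -bandwidth 50 ITSO_SVC2
svctask mkpartnership -bandwidth 50 ITSO_SVC1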
svctask chpartnership
To change the bandwidth that is available for background copy in an SVC cluster partnership, use the svctask chpartnership command to specify the new bandwidth.
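For example, to lower the background copy bandwidth to 30 MBps (the cluster name is again illustrative):
svctask chpartnership -bandwidth 30 ITSO_SVC2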
svctask mkrcconsistgrp
Use the svctask mkrcconsistgrp command to create a new, empty Global Mirror Consistency Group. The Global Mirror Consistency Group name must be unique across all Consistency Groups that are known to the clusters owning this Consistency Group. If the Consistency Group involves two clusters, the clusters must be in communication throughout the creation process. The new Consistency Group does not contain any relationships and will be in the Empty state. You can add Global Mirror relationships to the group, either upon creation or afterward, by using the svctask chrelationship command.
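For example, a sketch that creates an intercluster Consistency Group (the group name CG_GM and remote cluster ITSO_SVC2 are illustrative):
svctask mkrcconsistgrp -cluster ITSO_SVC2 -name CG_GM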
svctask mkrcrelationship
Use the svctask mkrcrelationship command to create a new Global Mirror relationship. This relationship persists until it is deleted. The auxiliary volume must be equal in size to the master volume or the command will fail, and if both volumes are in the same cluster, they must both be in the same I/O Group. The master and auxiliary volume cannot be in an existing relationship, and they cannot be the target of a FlashCopy mapping. This command returns the new relationship (relationship_id) when successful. When creating the Global Mirror relationship, you can add it to a Consistency Group that already exists, or it can be a stand-alone Global Mirror relationship if no Consistency Group is specified. To check whether the master or auxiliary volumes comply with the prerequisites to participate in a Global Mirror relationship, use the svcinfo lsrcrelationshipcandidate command, as shown in svcinfo lsrcrelationshipcandidate on page 459.
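For example, a sketch that creates a Global Mirror relationship in that group; the -global flag selects Global Mirror rather than the default Metro Mirror, and the volume names GM_Master and GM_Aux are hypothetical:
svctask mkrcrelationship -master GM_Master -aux GM_Aux -cluster ITSO_SVC2 -global -consistgrp CG_GM -name GMREL1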
svcinfo lsrcrelationshipcandidate
Use the svcinfo lsrcrelationshipcandidate command to list the available volumes that are eligible to form a Global Mirror relationship. When issuing the command, you can specify the master volume name and auxiliary cluster to list candidates that comply with the prerequisites to create a Global Mirror relationship. If the command is issued with no parameters, all volumes that are not disallowed by another configuration state, such as being a FlashCopy target, are listed.
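For example (object names as in the preceding sketches; treat the exact flag names as an assumption to verify against the CLI reference for your code level):
svcinfo lsrcrelationshipcandidate -master GM_Master -aux ITSO_SVC2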
svctask chrcrelationship
Use the svctask chrcrelationship command to modify the following properties of a Global Mirror relationship:
- Change the name of a Global Mirror relationship.
- Add a relationship to a group.
- Remove a relationship from a group by using the -force flag.
Adding a Global Mirror relationship: When adding a Global Mirror relationship to a Consistency Group that is not empty, the relationship must have the same state and copy direction as the group to which it is being added.
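For example, to add the hypothetical relationship GMREL1 to the group CG_GM, and later to remove it from the group:
svctask chrcrelationship -consistgrp CG_GM GMREL1
svctask chrcrelationship -force -noconsistgrp GMREL1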
svctask chrcconsistgrp
Use the svctask chrcconsistgrp command to change the name of a Global Mirror Consistency Group.
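For example (names illustrative):
svctask chrcconsistgrp -name CG_GM_NEW CG_GM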
svctask startrcrelationship
Use the svctask startrcrelationship command to start the copy process of a Global Mirror relationship. When issuing the command, you can set the copy direction if it is undefined, and, optionally, you can mark the auxiliary volume of the relationship as clean. The command fails if it is used as an attempt to start a relationship that is already a part of a Consistency Group. You can only issue this command to a relationship that is connected. For a relationship that is idling, this command assigns a copy direction (master and auxiliary roles) and begins the copy process. Otherwise, this command restarts a previous copy process that was stopped either by a stop command or by an I/O error. If the resumption of the copy process leads to a period when the relationship is inconsistent, you must specify the -force parameter when restarting the relationship. This situation can arise if, for example, the relationship was stopped and then further writes were performed on the original master of the relationship. The use of the -force parameter here is a reminder that the data on the auxiliary will become inconsistent while resynchronization (background copying) takes place and, therefore, is unusable for DR purposes before the background copy has completed. In the Idling state, you must specify the master volume to indicate the copy direction. In other connected states, you can provide the -primary argument, but it must match the existing setting.
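For example, to start the stand-alone relationship GMREL1 from the Idling state with the master as the copy source (add -force if the resynchronization gives up consistency):
svctask startrcrelationship -primary master GMREL1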
svctask stoprcrelationship
Use the svctask stoprcrelationship command to stop the copy process for a relationship. You can also use this command to enable write access to a consistent auxiliary volume by specifying the -access parameter. This command applies to a stand-alone relationship. It is rejected if it is addressed to a relationship that is part of a Consistency Group. You can issue this command to stop a relationship that is copying from master to auxiliary. If the relationship is in an inconsistent state, any copy operation stops and does not resume until you issue an svctask startrcrelationship command. Write activity is no longer copied from the master to the auxiliary volume. For a relationship in the ConsistentSynchronized state, this command causes a Consistency Freeze. When a relationship is in a consistent state (that is, in the ConsistentStopped, ConsistentSynchronized, or ConsistentDisconnected state), you can use the -access parameter with the svctask stoprcrelationship command to enable write access to the auxiliary volume.
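For example, to stop GMREL1 and enable write access to its auxiliary volume:
svctask stoprcrelationship -access GMREL1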
svctask startrcconsistgrp
Use the svctask startrcconsistgrp command to start a Global Mirror Consistency Group. You can only issue this command to a Consistency Group that is connected. For a Consistency Group that is idling, this command assigns a copy direction (master and auxiliary roles) and begins the copy process. Otherwise, this command restarts a previous copy process that was stopped either by a stop command or by an I/O error.
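For example (group name illustrative):
svctask startrcconsistgrp -primary master CG_GM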
svctask stoprcconsistgrp
Use the svctask stoprcconsistgrp command to stop the copy process for a Global Mirror Consistency Group. You can also use this command to enable write access to the auxiliary volumes in the group if the group is in a consistent state. If the Consistency Group is in an inconsistent state, any copy operation stops and does not resume until you issue the svctask startrcconsistgrp command. Write activity is no longer copied from the master to the auxiliary volumes that belong to the relationships in the group. For a Consistency Group in the ConsistentSynchronized state, this command causes a Consistency Freeze. When a Consistency Group is in a consistent state (for example, in the ConsistentStopped, ConsistentSynchronized, or ConsistentDisconnected state), you can use the -access parameter with the svctask stoprcconsistgrp command to enable write access to the auxiliary volumes within that group.
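For example, to stop the group and enable write access to its auxiliary volumes:
svctask stoprcconsistgrp -access CG_GM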
svctask rmrcrelationship
Use the svctask rmrcrelationship command to delete the relationship that is specified. Deleting a relationship only deletes the logical relationship between the two volumes. It does not affect the volumes themselves. If the relationship is disconnected at the time that the command is issued, the relationship is only deleted on the cluster on which the command is being run. When the clusters reconnect, the relationship is automatically deleted on the other cluster. Alternatively, if the clusters are disconnected, and you still want to remove the relationship on both clusters, you can issue the rmrcrelationship command independently on both of the clusters. A relationship cannot be deleted if it is part of a Consistency Group. You must first remove the relationship from the Consistency Group. If you delete an inconsistent relationship, the auxiliary volume becomes accessible even though it is still inconsistent. This situation is the one case in which Global Mirror does not inhibit access to inconsistent data.
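For example:
svctask rmrcrelationship GMREL1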
svctask rmrcconsistgrp
Use the svctask rmrcconsistgrp command to delete a Global Mirror Consistency Group. This command deletes the specified Consistency Group. You can issue this command for any existing Consistency Group. If the Consistency Group is disconnected at the time that the command is issued, the Consistency Group is only deleted on the cluster on which the command is being run. When the clusters reconnect, the Consistency Group is automatically deleted on the other cluster. Alternatively, if the clusters are disconnected, and you still want to remove the Consistency Group on both clusters, you can issue the svctask rmrcconsistgrp command separately on both of the clusters. If the Consistency Group is not empty, the relationships within it are removed from the Consistency Group before the group is deleted. These relationships then become stand-alone relationships. The state of these relationships is not changed by the action of removing them from the Consistency Group.
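For example (-force deletes the group even if it still contains relationships, which then become stand-alone):
svctask rmrcconsistgrp -force CG_GM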
svctask switchrcrelationship
Use the svctask switchrcrelationship command to reverse the roles of the master volume and the auxiliary volume when a stand-alone relationship is in a consistent state; when issuing the command, the desired master needs to be specified.
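For example, to make the current auxiliary volume of GMREL1 the new master:
svctask switchrcrelationship -primary aux GMREL1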
svctask switchrcconsistgrp
Use the svctask switchrcconsistgrp command to reverse the roles of the master volume and the auxiliary volume when a Consistency Group is in a consistent state. This change is applied to all of the relationships in the Consistency Group, and when issuing the command, the desired master needs to be specified.
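For example:
svctask switchrcconsistgrp -primary aux CG_GM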
limit as possible. You can easily double or triple the expected physical latency with a lower-quality or lower-bandwidth network link, and suddenly you are within range of exceeding the limit the moment a large flood of I/O happens that exceeds the bandwidth capacity you have in place.
When you get a 1920 event, always check the latency first. Keep in mind that the FCIP routing layer can introduce latency if it is not properly configured. If your network provider reports a much lower latency, this can be an indication of a problem at your FCIP routing layer. Most FCIP routing devices have built-in tools that allow you to check the RTT. When checking latency, remember that TCP/IP routing devices (including FCIP routers) report the round-trip time (RTT) using standard 64-byte ping packets.
In Figure 8-41 you can see why the effective transit time should only be measured using packets that are large enough to hold a Fibre Channel frame. This size is 2148 bytes (2112 bytes of payload and 36 bytes of header), and you should allow some overhead to be safe, because different switching vendors have optional features that can increase this size. After you have verified your latency using the proper packet size, proceed with normal hardware troubleshooting.
Before we proceed, let's take a quick look at the second largest component of your round-trip time: serialization delay. Serialization delay is simply the amount of time that is required to move a packet of data of a specific size across a network link of a given bandwidth. It is based on a simple concept: the time required to move a specific amount of data decreases as the data transmission rate increases. Look again at Figure 8-41 and notice the orders of magnitude of difference between the various link bandwidths. It is easy to see how 1920 errors can arise when your bandwidth is insufficient, and why you should never use a TCP/IP ping to measure round-trip time for FCIP traffic.
Figure 8-41 The effect of packet size (in bytes) versus the link size
Figure 8-41 compares the amount of time in microseconds required to transmit a packet across network links of varying bandwidth capacity. Three packet sizes are used: 1. 64 bytes: The size of the common ping packet.
2. 1500 bytes: The size of the standard TCP/IP packet.
3. 2148 bytes: The size of a Fibre Channel frame.
Finally, remember that your path MTU affects the delay incurred in getting a packet from one location to another if it causes fragmentation, or if it is too large and causes excessive retransmissions when a packet is lost.
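As a sanity check, you can compute serialization delay directly as frame size divided by link bandwidth. The following figures are illustrative arithmetic, not values taken from Figure 8-41: a 2148-byte Fibre Channel frame is 17,184 bits, which takes roughly 17,184 / 1,544,000 = 11.1 ms to serialize onto a 1.544 Mbps T1 link, but only about 0.17 ms onto a 100 Mbps link. A 64-byte ping (512 bits) serializes in about 0.33 ms even on the T1, which is why a small ping can look deceptively fast on a link that is far too slow for Fibre Channel frames.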
Chapter 9. SAN Volume Controller operations using the command-line interface
Help: You can also use -h instead of -?, for example, the svcinfo -h or svctask commandname -h command. If you look at the syntax of the command by typing svcinfo command name -?, you often see -filter listed as a parameter. Be aware that the correct parameter is -filtervalue. Tip: You can use the up and down arrow keys on your keyboard to recall commands that were recently issued. Then, you can use the left and right, Backspace, and Delete keys to edit commands before you resubmit them.
Using shortcuts
Use the shortcuts command to display a list of display or execution commands. This command produces an alphabetical list of actions that are supported. The command parameter must be svcinfo for display commands or svctask for execution commands. The model parameter allows for different shortcuts on different platforms: 2145 or 2076. The syntax is:
<command> shortcuts <model>
See Example 9-1 on page 469 (some lines have been removed from the command output for brevity).
Example 9-1 Listing the command shortcuts
IBM_2145:ITSO_SVC1:admin>svctask shortcuts 2145
addcontrolenclosure addhostiogrp addhostport addmdisk addnode addvdiskcopy
applydrivesoftware applysoftware cancellivedump cfgportip chhost chiogrp
chldap chldapserver chlicense chmdisk chmdiskgrp chnode chnodehw
chpartnership chquorum chrcconsistgrp
...
mkemailserver mkemailuser mkfcconsistgrp mkfcmap mkhost mkldapserver
mkmdiskgrp mkpartnership mkrcconsistgrp mkrcrelationship mksnmpserver
mksyslogserver mkuser mkusergrp mkvdisk mkvdiskhostmap
...
rmmdisk rmmdiskgrp rmnode rmpartnership rmportip rmrcconsistgrp
...
triggerlivedump writesernum
Using reverse-i-search
If you work on your SVC with the same PuTTY session for many hours and enter many commands, scrolling back to find a previous or similar command can be time-intensive. In this case, the reverse-i-search function can help you quickly and easily find any command that you already issued in your command history by using the Ctrl+r keys. Pressing Ctrl+r at an empty command prompt starts an interactive search through the command history as you type, and gives you a prompt as shown in Example 9-2.
Example 9-2 Using reverse-i-search
IBM_2145:ITSO_SVC1:admin>lsiogrp
id name            node_count vdisk_count host_count
0  io_grp0         2          10          8
1  io_grp1         2          10          8
2  io_grp2         0          0           0
3  io_grp3         0          0           0
4  recovery_io_grp 0          0           0
(reverse-i-search)`i': lsiogrp
As shown in Example 9-2, we had previously executed the lsiogrp command. By pressing Ctrl+r and typing i, the command that we needed was recalled from the history.
IBM_2145:ITSO_SVC1:admin>chcontroller -name ITSO-DS3500 DS3500
IBM_2145:ITSO_SVC1:admin>lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0  ITSO-DS5000              LSI       INF-01-0       0
2  ITSO-DS3500     b Ns M   LSI       INF-01-0       0
IBM_2145:ITSO_SVC1:admin>
This command renames the controller named DS3500 to ITSO-DS3500.
Choosing a new name: The chcontroller command specifies the new name first. You can use letters A to Z, a to z, numbers 0 to 9, the dash (-), and the underscore (_). The new name can be between one and 63 characters in length. However, the new name cannot start with a number, dash, or the word controller (because this prefix is reserved for SVC assignment only).
IBM_2145:ITSO_SVC1:admin>lsdiscoverystatus
id scope     IO_group_id IO_group_name status
0  fc_fabric                           inactive
This command displays the state of all discoveries in the clustered system. During discovery, the system updates the drive and MDisk records. You must wait until the discovery has finished and is inactive before you attempt to use the system. This command displays one of the following results:
- active: There is a discovery operation in progress at the time that the command is issued.
- inactive: There are no discovery operations in progress at the time that the command is issued.
Some storage controllers do not send the Small Computer System Interface (SCSI) primitives that are necessary to automatically discover the new MDisks. If new storage has been attached and the clustered system has not detected it, it might be necessary to run this command before the system can detect the new MDisks. Use the detectmdisk command to scan for newly added MDisks (Example 9-6).
Example 9-6 detectmdisk
IBM_2145:ITSO_SVC1:admin>detectmdisk
To check whether any newly added MDisks were successfully detected, run the lsmdisk command and look for new unmanaged MDisks. If the disks do not appear, check that the disk is appropriately assigned to the SVC in the disk subsystem and that the zones are set up properly.
Note: If you have assigned a large number of logical unit numbers (LUNs) to your SVC, the discovery process can take time. Check several times, using the lsmdisk command, whether all of the MDisks that you were expecting are present.
When all of the disks that are allocated to the SVC are seen from the SVC system, the following procedure is a useful way to verify which MDisks are unmanaged and ready to be added to a storage pool. Perform the following steps to display MDisks:
1. Enter the lsmdiskcandidate command, as shown in Example 9-7. This command displays all detected MDisks that are not currently part of a storage pool.
Example 9-7 lsmdiskcandidate command
IBM_2145:ITSO_SVC1:admin>lsmdiskcandidate
id
0
1
2
...
Alternatively, you can list all MDisks (managed or unmanaged) by issuing the lsmdisk command, as shown in Example 9-8.
Example 9-8 lsmdisk command IBM_2145:ITSO_SVC1:admin>lsmdisk -filtervalue controller_name=ITSO-DS3500 id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID tier 0 mdisk0 online managed 0 STGPool_DS3500-1 128.0GB 0000000000000000 ITSO-DS3500 60080e50001b0b62000007b04e731e4d00000000000000000000000000000000 generic_hdd 1 mdisk1 online managed 0 STGPool_DS3500-1 128.0GB 0000000000000001 ITSO-DS3500 60080e50001b0b62000007b24e731e6000000000000000000000000000000000 generic_hdd 2 mdisk2 online managed 0 STGPool_DS3500-1 128.0GB 0000000000000002 ITSO-DS3500 60080e50001b09e8000006f44e731bdc00000000000000000000000000000000 generic_hdd 3 mdisk3 online managed 1 STGPool_DS3500-2 128.0GB 0000000000000003 ITSO-DS3500 60080e50001b0b62000007b44e731e8400000000000000000000000000000000 generic_hdd 4 mdisk4 online managed 1 STGPool_DS3500-2 128.0GB 0000000000000004 ITSO-DS3500 60080e50001b09e8000006f64e731bff00000000000000000000000000000000 generic_hdd
5 mdisk5 online managed 1 STGPool_DS3500-2 128.0GB 0000000000000005 ITSO-DS3500 60080e50001b0b62000007b64e731ea900000000000000000000000000000000 generic_hdd 6 mdisk6 online unmanaged 10.0GB 0000000000000006 ITSO-DS3500 60080e50001b09e80000085f4e7d60dd00000000000000000000000000000000 generic_hdd
From this output, you can see additional information about each MDisk (such as the current status). For the purpose of our current task, we are only interested in the unmanaged disks, because they are candidates for a storage pool. Tip: The -delim parameter collapses output instead of wrapping text over multiple lines. 2. If not all of the MDisks that you expected are visible, rescan the available FC network by entering the detectmdisk command, as shown in Example 9-9.
Example 9-9 detectmdisk IBM_2145:ITSO_SVC1:admin>detectmdisk
3. If you run the lsmdiskcandidate command again and your MDisk or MDisks are still not visible, check that the LUNs from your subsystem have been properly assigned to the SVC and that appropriate zoning is in place (for example, the SVC can see the disk subsystem). See Chapter 3, Planning and configuration on page 67 for details about setting up your storage area network (SAN) fabric.
Example 9-11 Usage of the command lsmdisk (ID)
IBM_2145:ITSO_SVC1:admin>lsmdisk 0
id 0
name mdisk0
status online
mode managed
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
capacity 128.0GB
quorum_index 1
block_size 512
controller_name ITSO-DS3500
ctrl_type 4
ctrl_WWNN 20080080E51B09E8
controller_id 2
path_count 4
max_path_count 4
ctrl_LUN_# 0000000000000000
UID 60080e50001b0b62000007b04e731e4d00000000000000000000000000000000
preferred_WWPN 20580080E51B09E8
active_WWPN 20580080E51B09E8
fast_write_state empty
raid_status
raid_level
redundancy
strip_size
spare_goal
spare_protection_min
balanced
tier generic_hdd
To rename an MDisk, use the chmdisk command; for example, you can rename the MDisk named mdisk0 to mdisk_0. The chmdisk command specifies the new name first. You can use letters A to Z, a to z, numbers 0 to 9, the dash (-), and the underscore (_). The new name can be between one and 63 characters in length. However, the new name cannot start with a number, dash, or the word mdisk (because this prefix is reserved for SVC assignment only).
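A minimal sketch of the invocation (the names follow the text):
IBM_2145:ITSO_SVC1:admin>chmdisk -name mdisk_0 mdisk0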
and you can undertake preventive maintenance. If not, the hosts that were using virtual disks (VDisks), which used the excluded MDisk, now have I/O errors. By running the lsmdisk command, you can see that mdisk0 is excluded in Example 9-13.
Example 9-13 lsmdisk command: Excluded MDisk IBM_2145:ITSO_SVC1:admin>lsmdisk -delim : id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID:tie r 0:mdisk0:excluded:managed:0:STGPool_DS3500-1:128.0GB:0000000000000000:ITSO-DS3500:60080e500 01b0b62000007b04e731e4d00000000000000000000000000000000:generic_hdd 1:mdisk1:online:managed:0:STGPool_DS3500-1:128.0GB:0000000000000001:ITSO-DS3500:60080e50001 b0b62000007b24e731e6000000000000000000000000000000000:generic_hdd 2:mdisk2:online:managed:0:STGPool_DS3500-1:128.0GB:0000000000000002:ITSO-DS3500:60080e50001 b09e8000006f44e731bdc00000000000000000000000000000000:generic_hdd 3:mdisk3:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000003:ITSO-DS3500:60080e50001 b0b62000007b44e731e8400000000000000000000000000000000:generic_hdd 4:mdisk4:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000004:ITSO-DS3500:60080e50001 b09e8000006f64e731bff00000000000000000000000000000000:generic_hdd 5:mdisk5:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000005:ITSO-DS3500:60080e50001 b0b62000007b64e731ea900000000000000000000000000000000:generic_hdd 6:mdisk6:online:unmanaged:::10.0GB:0000000000000006:ITSO-DS3500:60080e50001b09e80000085f4e7 d60dd00000000000000000000000000000000:generic_hdd
After taking the necessary corrective action to repair the MDisk (for example, replace the failed disk, repair the SAN zones, and so on), we need to include the MDisk again by issuing the includemdisk command (Example 9-14), because the SVC system does not include the MDisk automatically.
Example 9-14 includemdisk IBM_2145:ITSO_SVC1:admin>includemdisk mdisk0
Running the lsmdisk command again shows mdisk0 online again; see Example 9-15.
Example 9-15 lsmdisk command: Verifying that MDisk is included IBM_2145:ITSO_SVC1:admin>lsmdisk -delim : id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID:tie r 0:mdisk0:online:managed:0:STGPool_DS3500-1:128.0GB:0000000000000000:ITSO-DS3500:60080e50001 b0b62000007b04e731e4d00000000000000000000000000000000:generic_hdd 1:mdisk1:online:managed:0:STGPool_DS3500-1:128.0GB:0000000000000001:ITSO-DS3500:60080e50001 b0b62000007b24e731e6000000000000000000000000000000000:generic_hdd 2:mdisk2:online:managed:0:STGPool_DS3500-1:128.0GB:0000000000000002:ITSO-DS3500:60080e50001 b09e8000006f44e731bdc00000000000000000000000000000000:generic_hdd 3:mdisk3:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000003:ITSO-DS3500:60080e50001 b0b62000007b44e731e8400000000000000000000000000000000:generic_hdd 4:mdisk4:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000004:ITSO-DS3500:60080e50001 b09e8000006f64e731bff00000000000000000000000000000000:generic_hdd 5:mdisk5:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000005:ITSO-DS3500:60080e50001 b0b62000007b64e731ea900000000000000000000000000000000:generic_hdd 6:mdisk6:online:unmanaged:::10.0GB:0000000000000006:ITSO-DS3500:60080e50001b09e80000085f4e7 d60dd00000000000000000000000000000000:generic_hdd
You can only add unmanaged MDisks to a storage pool. The command sketched below adds the MDisk named mdisk6 to the storage pool named STGPool_Multi_Tier. Important: Do not add this MDisk to a storage pool if you want to create an image mode volume from the MDisk that you are adding. As soon as you add an MDisk to a storage pool, it becomes managed, and extent mapping is not necessarily one-to-one anymore.
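The addmdisk invocation itself is not shown in this extract; a minimal sketch of it follows:
IBM_2145:ITSO_SVC1:admin>addmdisk -mdisk mdisk6 STGPool_Multi_Tier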
As you can see in Example 9-17, with this command, by using a wildcard, you can see all of the MDisks that are present in the storage pools named STGPool_*, where the asterisk (*) is the wildcard.
Example 9-18 mkmdiskgrp IBM_2145:ITSO_SVC1:admin>mkmdiskgrp -name STGPool_Multi_Tier -ext 256 MDisk Group, id [3], successfully created
This command creates a storage pool called STGPool_Multi_Tier. The extent size that is used within this group is 256 MB. We have not added any MDisks to the storage pool yet, so it is an empty storage pool. You can add unmanaged MDisks and create the storage pool in the same command. Use the command mkmdiskgrp with the -mdisk parameter and enter the IDs or names of the MDisks. This will add the MDisks immediately after the storage pool is created. Prior to the creation of the storage pool, enter the lsmdisk command as shown in Example 9-19. This lists all of the available MDisks that are seen by the SVC system.
Example 9-19 Listing available MDisks
IBM_2145:ITSO_SVC1:admin>lsmdisk -filtervalue controller_name=ITSO-DS3500 -delim :
id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID:tier
0:mdisk0:online:managed:0:STGPool_DS3500-1:128.0GB:0000000000000000:ITSO-DS3500:60080e50001b0b62000007b04e731e4d00000000000000000000000000000000:generic_hdd
1:mdisk1:online:managed:0:STGPool_DS3500-1:128.0GB:0000000000000001:ITSO-DS3500:60080e50001b0b62000007b24e731e6000000000000000000000000000000000:generic_hdd
2:mdisk2:online:managed:0:STGPool_DS3500-1:128.0GB:0000000000000002:ITSO-DS3500:60080e50001b09e8000006f44e731bdc00000000000000000000000000000000:generic_hdd
3:mdisk3:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000003:ITSO-DS3500:60080e50001b0b62000007b44e731e8400000000000000000000000000000000:generic_hdd
4:mdisk4:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000004:ITSO-DS3500:60080e50001b09e8000006f64e731bff00000000000000000000000000000000:generic_hdd
5:mdisk5:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000005:ITSO-DS3500:60080e50001b0b62000007b64e731ea900000000000000000000000000000000:generic_hdd
6:mdisk6:online:unmanaged:::10.0GB:0000000000000006:DS3500:60080e50001b09e80000085f4e7d60dd00000000000000000000000000000000:generic_hdd
8:mdisk7:online:unmanaged:::10.0GB:0000000000000008:DS3500:60080e50001b09e8000008614e7d8a2c00000000000000000000000000000000:generic_hdd
Using the same command as before (mkmdiskgrp) and knowing the MDisk IDs that we are using, we can add multiple MDisks to the storage pool at the same time. We now add the unmanaged MDisks to the storage pool that we created, as shown in Example 9-20.
Example 9-20 Creating a storage pool and adding available MDisks
IBM_2145:ITSO_SVC1:admin>mkmdiskgrp -name STGPool_DS5000 -ext 256 -mdisk 6:8
MDisk Group, id [2], successfully created
This command creates a storage pool called STGPool_DS5000. The extent size that is used within this group is 256 MB, and two MDisks (6 and 8) are added to the storage pool.
Storage pool name: The -name and -mdisk parameters are optional. If you do not enter a -name, the default is MDiskgrpx, where x is the ID sequence number that is assigned by the SVC internally. If you do not enter the -mdisk parameter, an empty storage pool is created. If you want to provide a name, you can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_). The name can be between one and 63 characters in length, but it cannot start with a number or the word MDiskgrp (because this prefix is reserved for SVC assignment only).

By running the lsmdisk command, you now see the new MDisks as managed and as part of their storage pools, as shown in Example 9-21.
Example 9-21 lsmdisk command
IBM_2145:ITSO_SVC1:admin>lsmdisk -filtervalue controller_name=ITSO-DS3500 -delim :
id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID:tier
0:mdisk0:online:managed:0:STGPool_DS3500-1:128.0GB:0000000000000000:ITSO-DS3500:60080e50001b0b62000007b04e731e4d00000000000000000000000000000000:generic_hdd
1:mdisk1:online:managed:0:STGPool_DS3500-1:128.0GB:0000000000000001:ITSO-DS3500:60080e50001b0b62000007b24e731e6000000000000000000000000000000000:generic_hdd
2:mdisk2:online:managed:0:STGPool_DS3500-1:128.0GB:0000000000000002:ITSO-DS3500:60080e50001b09e8000006f44e731bdc00000000000000000000000000000000:generic_hdd
3:mdisk3:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000003:ITSO-DS3500:60080e50001b0b62000007b44e731e8400000000000000000000000000000000:generic_hdd
4:mdisk4:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000004:ITSO-DS3500:60080e50001b09e8000006f64e731bff00000000000000000000000000000000:generic_hdd
5:mdisk5:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000005:ITSO-DS3500:60080e50001b0b62000007b64e731ea900000000000000000000000000000000:generic_hdd
6:mdisk6:online:managed:2:STGPool_DS3500-2:10.0GB:0000000000000006:ITSO-DS3500:60080e50001b09e80000085f4e7d60dd00000000000000000000000000000000:generic_hdd
7:mdisk7:online:managed:3:STGPool_Multi_Tier:10.0GB:0000000000000007:ITSO-DS3500:60080e50001b0b620000091f4e7d8c9400000000000000000000000000000000:generic_hdd
8:mdisk8:online:managed:2:STGPool_DS3500-2:10.0GB:0000000000000008:ITSO-DS3500:60080e50001b09e8000008614e7d8a2c00000000000000000000000000000000:generic_hdd
9:mdisk9:online:managed:3:STGPool_Multi_Tier:10.0GB:0000000000000009:ITSO-DS3500:60080e50001b0b62000009214e7d928000000000000000000000000000000000:generic_hdd
At this point, you have completed the tasks that are required to create a new storage pool.
IBM_2145:ITSO_SVC1:admin>chmdiskgrp -name STGPool_DS3500-2_new 1
IBM_2145:ITSO_SVC1:admin>lsmdiskgrp -delim :
id:name:status:mdisk_count:vdisk_count:capacity:extent_size:free_capacity:virtual_capacity:used_capacity:real_capacity:overallocation:warning:easy_tier:easy_tier_status
0:STGPool_DS3500-1:online:3:11:382.50GB:256:62.50GB:320.00GB:320.00GB:320.00GB:83:0:auto:inactive
1:STGPool_DS3500-2_new:online:3:11:384.00GB:256:262.00GB:122.00GB:122.00GB:122.00GB:31:0:auto:inactive
2:STGPool_DS5000-1:online:2:0:20.00GB:256:20.00GB:0.00MB:0.00MB:0.00MB:0:0:auto:inactive
3:STGPool_Multi_Tier:online:2:0:20.00GB:256:20.00GB:0.00MB:0.00MB:0.00MB:0:0:auto:inactive

This command renames the storage pool STGPool_DS3500-2 to STGPool_DS3500-2_new, as shown.

Changing the storage pool: The chmdiskgrp command specifies the new name first. You can use the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the underscore (_). The new name can be between one and 63 characters in length. However, the new name cannot start with a number, the dash, or the word mdiskgrp (because this prefix is reserved for SVC assignment only).
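A minimal sketch of the removal that the next paragraph describes (the pool name is taken from the rename above):
IBM_2145:ITSO_SVC1:admin>rmmdiskgrp STGPool_DS3500-2_new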
This command removes storage pool STGPool_DS3500-2_new from the SVC system configuration.

Removing a storage pool from the SVC system configuration: If there are MDisks within the storage pool, you must use the -force flag to remove the storage pool from the SVC system configuration, for example:
rmmdiskgrp STGPool_DS3500-2_new -force
Ensure that you definitely want to use this flag, because it destroys all mapping information and all data held on the volumes, and this data cannot be recovered.
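A minimal sketch of the rmmdisk invocation described next (the MDisk and pool IDs are taken from the surrounding text):
IBM_2145:ITSO_SVC1:admin>rmmdisk -mdisk 8 -force 2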
This command removes the MDisk with ID 8 from the storage pool with ID 2. The -force flag is set because volumes are using this storage pool.

Sufficient space: The removal takes place only if there is sufficient space to migrate the volume data to other extents on other MDisks that remain in the storage pool. After you remove the MDisk from the storage pool, it takes time for its mode to change from managed to unmanaged, depending on the size of the MDisk that you are removing.
After you know that the WWPNs that are displayed match your host (use host or SAN switch utilities to verify), use the mkhost command to create your host. Name: If you do not provide the -name parameter, the SVC automatically generates the name hostx (where x is the ID sequence number that is assigned by the SVC internally). You can use the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the underscore (_). The name can be between one and 63 characters in length. However, the name cannot start with a number, dash, or the word host (because this prefix is reserved for SVC assignment only).
IBM_2145:ITSO_SVC1:admin>mkhost -name Almaden -hbawwpn 210000E08B89C1CD:210000E08B054CAA
Host, id [2], successfully created

This command creates a host called Almaden using the WWPNs 21:00:00:E0:8B:89:C1:CD and 21:00:00:E0:8B:05:4C:AA.

Ports: You can define from one to eight ports per host, or you can use the addhostport command, which we show in 9.3.5, Adding ports to a defined host on page 484.
IBM_2145:ITSO_SVC1:admin>mkhost -name Almaden -hbawwpn 210000E08B89C1CD:210000E08B054CAA -force
Host, id [2], successfully created

This command forces the creation of a host called Almaden using the WWPNs 210000E08B89C1CD and 210000E08B054CAA.

Note: WWPNs are not case sensitive in the CLI.
Before we start, we check our server's IQN address. We are running Windows Server 2008. We select Start > Programs > Administrative Tools, and we select iSCSI Initiator. In our example, our IQN, as shown in Figure 9-1, is:
iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com
We create the host by issuing the mkhost command, as shown in Example 9-29. When the command completes successfully, we display our newly created host.
Example 9-29 mkhost command
IBM_2145:ITSO_SVC1:admin>mkhost -name Baldur -iogrp 0 -iscsiname iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com
Host, id [4], successfully created
IBM_2145:ITSO_SVC1:admin>lshost 4
id 4
name Baldur
port_count 1
type generic
mask 1111
iogrp_count 1
iscsiname iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com
node_logged_in_count 0
state offline

It is important to know that when the host is initially configured, the default authentication method is set to no authentication, and no Challenge Handshake Authentication Protocol (CHAP) secret is set. To set a CHAP secret for authenticating the iSCSI host with the SVC system, use the chhost command with the -chapsecret parameter.
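For example, a minimal sketch (the secret value is an assumption):
IBM_2145:ITSO_SVC1:admin>chhost -chapsecret mysecret Baldur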
We have now created our host definition. We map a volume to our new iSCSI server, as shown in Example 9-30. We have already created the volume, as shown in 9.5.1, Creating a volume on page 487. In our scenario, our volume has ID 21 and the host name is Baldur. We map it to our iSCSI host.
Example 9-30 Mapping a volume to the iSCSI host
IBM_2145:ITSO_SVC1:admin>mkvdiskhostmap -host Baldur 21
Virtual Disk to Host map, id [0], successfully created

After the volume has been mapped to the host, we display the host information again, as shown in Example 9-31.
Example 9-31 lshost
IBM_2145:ITSO_SVC1:admin>lshost 4
id 4
name Baldur
port_count 1
type generic
mask 1111
iogrp_count 1
iscsiname iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com
node_logged_in_count 1
state online

Note: FC hosts and iSCSI hosts are handled in the same way operationally after they have been created.

If you need to display a CHAP secret for an already defined server, use the lsiscsiauth command. The lsiscsiauth command lists the Challenge Handshake Authentication Protocol (CHAP) secret that is configured for authenticating an entity to the SAN Volume Controller system.
IBM_2145:ITSO_SVC1:admin>chhost -name Angola Guinea
IBM_2145:ITSO_SVC1:admin>lshost
id name   port_count iogrp_count
0  Palau  2          4
1  Nile   2          1
2  Kanaga 2          1
3  Siam   2          2
4  Angola 1          4
Note: The chhost command specifies the new name first. You can use letters A to Z and a to z, numbers 0 to 9, the dash (-), and the underscore (_). The new name can be between one and 63 characters in length. However, it cannot start with a number, dash, or the word host (because this prefix is reserved for SVC assignment only).
Note: If you use Hewlett-Packard UNIX (HP-UX), you use the -type option. See IBM System Storage Open Software Family SAN Volume Controller: Host Attachment Guide, SC26-7563, for more information about the hosts that require the -type parameter.
IBM_2145:ITSO_SVC1:admin>rmhost Angola

Deleting a host: If there are any volumes assigned to the host, you must use the -force flag, for example: rmhost -force Angola.
IBM_2145:ITSO_SVC1:admin>lshbaportcandidate
id
210000E08B054CAA

If the WWPN matches your information (use host or SAN switch utilities to verify), use the addhostport command to add the port to the host. Example 9-35 shows the command to add a host port.
Example 9-35 addhostport
IBM_2145:ITSO_SVC1:admin>addhostport -hbawwpn 210000E08B054CAA Palau

This command adds the WWPN 210000E08B054CAA to the Palau host.
Adding multiple ports: You can add multiple ports all at one time by using the colon (:) as a separator between WWPNs, for example:
addhostport -hbawwpn 210000E08B054CAA:210000E08B89C1CD Palau

If the new HBA is not connected or zoned, the lshbaportcandidate command does not display your WWPN. In this case, you can manually type the WWPN of your HBA or HBAs and use the -force flag to add the port, as shown in Example 9-36.
Example 9-36 addhostport
IBM_2145:ITSO_SVC1:admin>addhostport -hbawwpn 210000E08B054CAA -force Palau

This command forces the addition of the WWPN 210000E08B054CAA to the host called Palau.

WWPNs: WWPNs are not case sensitive within the CLI.

If you run the lshost command again, you see your host with an updated port count of 2, as shown in Example 9-37.
Example 9-37 lshost command: Port count
IBM_2145:ITSO_SVC1:admin>lshost
id name  port_count iogrp_count
0  Palau 2          4
...
If your host currently uses iSCSI as a connection method, you must have the new iSCSI IQN ID before you add the port. Unlike with FC-attached hosts, you cannot check for available candidates with iSCSI. After you have acquired the additional iSCSI IQN, use the addhostport command, as shown in Example 9-38.
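A minimal sketch of the invocation (the IQN and host name are assumed from the earlier Baldur example):
IBM_2145:ITSO_SVC1:admin>addhostport -iscsiname iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com Baldur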
Example 9-38 Adding an iSCSI port to an already configured host
IBM_2145:ITSO_SVC1:admin>lshost Palau
id 0
name Palau
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B054CAA
node_logged_in_count 2
state active
WWPN 210000E08B89C1CD
node_logged_in_count 2
state offline

When you know the WWPN or iSCSI IQN, use the rmhostport command to delete a host port, as shown in Example 9-40.
Example 9-40 rmhostport
To remove a WWPN:
IBM_2145:ITSO_SVC1:admin>rmhostport -hbawwpn 210000E08B89C1CD Palau
To remove an iSCSI IQN:
IBM_2145:ITSO_SVC1:admin>rmhostport -iscsiname iqn.1991-05.com.microsoft:baldur Baldur

These commands remove the WWPN 210000E08B89C1CD from the Palau host and the iSCSI IQN iqn.1991-05.com.microsoft:baldur from the Baldur host.

Removing multiple ports: You can remove multiple ports at one time by using the colon (:) as a separator between the port names, for example:
rmhostport -hbawwpn 210000E08B054CAA:210000E08B892BCD Angola
IBM_2145:ITSO_SVC1:admin>lsportip
id node_id node_name IP_address  mask          gateway      IP_address_6 prefix_6 gateway_6 MAC               duplex state        speed failover
1  1       node1                                                                            00:1a:64:95:2f:cc Full   unconfigured 1Gb/s no
1  1       node1                                                                            00:1a:64:95:2f:cc Full   unconfigured 1Gb/s yes
2  1       node1     10.44.36.64 255.255.255.0 10.44.36.254                                 00:1a:64:95:2f:ce Full   online       1Gb/s no
2  1       node1                                                                            00:1a:64:95:2f:ce Full   online       1Gb/s yes
1  2       node2                                                                            00:1a:64:95:3f:4c Full   unconfigured 1Gb/s no
1  2       node2                                                                            00:1a:64:95:3f:4c Full   unconfigured 1Gb/s yes
2  2       node2     10.44.36.65 255.255.255.0 10.44.36.254                                 00:1a:64:95:3f:4e Full   online       1Gb/s no
2  2       node2                                                                            00:1a:64:95:3f:4e Full   online       1Gb/s yes
1  3       node3                                                                            00:21:5e:41:53:18 Full   unconfigured 1Gb/s no
1  3       node3                                                                            00:21:5e:41:53:18 Full   unconfigured 1Gb/s yes
2  3       node3                 255.255.255.0 10.44.36.254                                 00:21:5e:41:53:1a Full   online       1Gb/s no
2  3       node3                                                                            00:21:5e:41:53:1a Full   online       1Gb/s yes
1  4       node4                                                                            00:21:5e:41:56:8c Full   unconfigured 1Gb/s no
1  4       node4                                                                            00:21:5e:41:56:8c Full   unconfigured 1Gb/s yes
2  4       node4     10.44.36.63 255.255.255.0 10.44.36.254                                 00:21:5e:41:56:8e Full   online       1Gb/s no
2  4       node4                                                                            00:21:5e:41:56:8e Full   online       1Gb/s yes
Example 9-42 shows how the cfgportip command assigns an IP address to each node Ethernet port for iSCSI I/O.
Example 9-42 cfgportip command
IBM_2145:ITSO_SVC1:admin>cfgportip -node 4 -ip 10.44.36.63 -gw 10.44.36.254 -mask 255.255.255.0 2 IBM_2145:ITSO_SVC1:admin>cfgportip -node 1 -ip 10.44.36.64 -gw 10.44.36.254 -mask 255.255.255.0 2 IBM_2145:ITSO_SVC1:admin>cfgportip -node 2 -ip 10.44.36.65 -gw 10.44.36.254 -mask 255.255.255.0 2
When creating a volume, you must enter several parameters at the CLI. There are both mandatory and optional parameters. See the full command string and detailed information in Command-Line Interface User's Guide, SC27-2287.

Creating an image mode disk: If you do not specify the -size parameter when you create an image mode disk, the entire MDisk capacity is used.

You must know the following information before you start creating the volume:
- In which storage pool the volume is going to have its extents
- From which I/O Group the volume will be accessed
- Which SVC node will be the preferred node for the volume
- The size of the volume
- The name of the volume
- The type of the volume
- Whether this volume will be managed by Easy Tier to optimize its performance

When you are ready to create your striped volume, use the mkvdisk command (we discuss sequential and image mode volumes later). The command in Example 9-43 creates a 10 GB striped volume within the storage pool STGPool_DS3500-2 and assigns it to the io_grp0 I/O Group. Its preferred node will be node 1.
Example 9-43 mkvdisk command
IBM_2145:ITSO_SVC1:admin>mkvdisk -mdiskgrp STGPool_DS3500-2 -iogrp io_grp0 -node 1 -size 10 -unit gb -name Tiger Virtual Disk, id [20], successfully created
To verify the results, use the lsvdisk command, as shown in Example 9-44.
Example 9-44 lsvdisk command
IBM_2145:ITSO_SVC1:admin>lsvdisk 20
id 20
name Tiger
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name STGPool_DS3500-2
capacity 10.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AF813F1000000000000016
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
filesystem
mirror_write_priority latency
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name STGPool_DS3500-2
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 10.00GB
At this point, you have completed the required tasks to create a volume.
name Volume_A
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name Pool_DS3500-1
capacity 10.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id 0
RC_name GMREL1
vdisk_UID 6005076801AF813F1000000000000031
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
filesystem
mirror_write_priority latency
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name Pool_DS3500-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 10.00GB
Example 9-46 Usage of the mkvdisk command
IBM_2145:ITSO_SVC1:admin>mkvdisk -mdiskgrp STGPool_DS3500-2 -iogrp 0 -vtype striped -size 10 -unit gb -rsize 50% -autoexpand -grainsize 32
Virtual Disk, id [21], successfully created
This command creates a space-efficient 10 GB volume. The volume belongs to the storage pool named STGPool_DS3500-2 and is owned by the io_grp0 I/O Group. The real capacity automatically expands until the volume size of 10 GB is reached. The grain size is set to 32 KB, which is the default.

Disk size: When using the -rsize parameter, you have the following options: disk_size, disk_size_percentage, and auto. Specify the disk_size_percentage value by using an integer, or an integer immediately followed by the percent (%) symbol. Specify the units for a disk_size integer by using the -unit parameter; the default is MB. The -rsize value can be greater than, equal to, or less than the size of the volume. The auto option creates a volume copy that uses the entire size of the MDisk. If you specify the -rsize auto option, you must also specify the -vtype image option. An entry of 1 GB uses 1024 MB.
Size: An image mode volume must be at least 512 bytes (the capacity cannot be 0). That is, the minimum size that can be specified for an image mode volume must be the same as the storage pool extent size to which it is added, with a minimum of 16 MB. You must use the -mdisk parameter to specify an MDisk that has a mode of unmanaged. The -fmtdisk parameter cannot be used to create an image mode volume.

Capacity: If you create a mirrored volume from two image mode MDisks without specifying a -capacity value, the capacity of the resulting volume is the smaller of the two MDisks, and the remaining space on the larger MDisk is inaccessible. If you do not specify the -size parameter when you create an image mode disk, the entire MDisk capacity is used.

Use the mkvdisk command to create an image mode volume, as shown in Example 9-47.
Example 9-47 mkvdisk (image mode)
IBM_2145:ITSO_SVC1:admin>mkvdisk -mdiskgrp STGPool_DS3500-1 -iogrp 0 -mdisk mdisk10 -vtype image -name Image_Volume_A
Virtual Disk, id [22], successfully created
This command creates an image mode volume called Image_Volume_A using the mdisk10 MDisk. The volume belongs to the storage pool STGPool_DS3500-1 and is owned by the io_grp0 I/O Group. If we run the lsvdisk command again, notice that the volume named Image_Volume_A has a type of image, as shown in Example 9-48.
Example 9-48 lsvdisk
IBM_2145:ITSO_SVC1:admin>lsvdisk -filtervalue type=image
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state se_copy_count RC_change
22 Image_Volume_A 0 io_grp0 online 0 STGPool_DS3500-1 10.00GB image 6005076801AF813F1000000000000018 0 1 empty 0 no
A mirrored copy also provides a way to migrate a volume to another storage pool: the volume is copied to the second storage pool while remaining online during the copy. To create a mirrored copy of a volume, use the addvdiskcopy command. This command adds a copy of the chosen volume to the selected storage pool, which changes a non-mirrored volume into a mirrored volume. In the following scenario, we show creating a mirrored volume copy from one storage pool in another storage pool. As you can see in Example 9-49, the volume currently has a single copy, with copy_id 0.
Example 9-49 lsvdisk
IBM_2145:ITSO_SVC1:admin>lsvdisk Volume_no_mirror
id 23
name Volume_no_mirror
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
capacity 1.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AF813F1000000000000019
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
filesystem
mirror_write_priority latency
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 1.00GB
In Example 9-50, we add the volume copy mirror by using the addvdiskcopy command.
Example 9-50 addvdiskcopy
IBM_2145:ITSO_SVC1:admin>addvdiskcopy -mdiskgrp STGPool_DS5000-1 -vtype striped -unit gb Volume_no_mirror
Vdisk [23] copy [1] successfully created
During the synchronization process, you can see the status by using the lsvdisksyncprogress command. As shown in Example 9-51, the first time that the status is checked, the synchronization progress is at 48%, with an estimated completion time of 110926203918 (in YYMMDDHHMMSS format, that is, 20:39:18 on September 26, 2011). The second time that the command is run, the progress status is at 100%, and the synchronization is complete.
Example 9-51 Synchronization
IBM_2145:ITSO_SVC1:admin>lsvdisksyncprogress
vdisk_id vdisk_name       copy_id progress estimated_completion_time
23       Volume_no_mirror 1       48       110926203918
IBM_2145:ITSO_SVC1:admin>lsvdisksyncprogress
vdisk_id vdisk_name       copy_id progress estimated_completion_time
23       Volume_no_mirror 1       100
As you can see in Example 9-52, the new mirrored volume copy (copy_id 1) has been added and can be seen by using the lsvdisk command.
Example 9-52 lsvdisk
IBM_2145:ITSO_SVC1:admin>lsvdisk 23
id 23
name Volume_no_mirror
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id many
mdisk_grp_name many
capacity 1.00GB
type many
formatted no
mdisk_id many
mdisk_name many
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AF813F1000000000000019
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 2
se_copy_count 0
filesystem
mirror_write_priority latency
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 1.00GB
copy_id 1
status online
sync yes
primary no
mdisk_grp_id 2
mdisk_grp_name STGPool_DS5000-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 1.00GB
While adding a volume copy mirror, you can define the new copy with parameters that differ from those of the existing copy. For example, you can define a thin-provisioned copy for a fully allocated volume, and vice versa, which is one way to migrate a fully allocated volume to a thin-provisioned volume.

Note: To change the parameters of a volume copy mirror, you must delete the volume copy and redefine it with the new values.

Now we can change the name of the volume just mirrored from Volume_no_mirror to Volume_mirrored, as shown in Example 9-53.
Example 9-53 Volume name changing
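A minimal sketch of the rename (the chvdisk command specifies the new name first; names taken from the surrounding text):
IBM_2145:ITSO_SVC1:admin>chvdisk -name Volume_mirrored Volume_no_mirror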
As you can see in Example 9-55, the new volume, named Volume_new, has been created as an independent volume.
Example 9-55 lsvdisk
IBM_2145:ITSO_SVC1:admin>lsvdisk Volume_new
id 24
name Volume_new
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 2
mdisk_grp_name STGPool_DS5000-1
capacity 1.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AF813F100000000000001A
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
filesystem
mirror_write_priority latency
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 2
mdisk_grp_name STGPool_DS5000-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 1.00GB
By issuing the command in Example 9-54 on page 496, Volume_mirrored no longer has its mirrored copy, and a new volume is created automatically.
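A minimal sketch of the splitvdiskcopy invocation that Example 9-54 refers to (the copy ID and names are taken from the surrounding text):
IBM_2145:ITSO_SVC1:admin>splitvdiskcopy -copy 1 -name Volume_new Volume_mirrored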
Tips:
- If the volume has a mapping to any hosts, it is not possible to move the volume to an I/O Group that does not include any of those hosts.
- This operation fails if there is not enough space to allocate bitmaps for a mirrored volume in the target I/O Group.
- If the -force parameter is used and the system is unable to destage all write data from the cache, the contents of the volume are corrupted by the loss of the cached data.
- If the -force parameter is used to move a volume that has out-of-sync copies, a full resynchronization is required.
IBM_2145:ITSO_SVC1:admin>chvdisk -rate 20 -unitmb volume_7
IBM_2145:ITSO_SVC1:admin>chvdisk -warning 85% volume_7

New name: The chvdisk command specifies the new name first. The name can consist of the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the underscore (_). It can be between one and 63 characters in length. However, it cannot start with a number, the dash, or the word vdisk (because this prefix is reserved for SVC assignment only).

The first command changes the volume throttling of volume_7 to 20 MBps. The second command changes the thin-provisioned volume warning threshold to 85%. To verify the changes, issue the lsvdisk command, as shown in Example 9-57.
Example 9-57 lsvdisk command: Verifying throttling
IBM_2145:ITSO_SVC1:admin>lsvdisk volume_7
id 1
name volume_7
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
capacity 10.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AF813F100000000000001F
virtual_disk_throttling (MB) 20
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 1
filesystem
mirror_write_priority latency
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 2.02GB
free_capacity 2.02GB
overallocation 496
autoexpand on
warning 85
grainsize 32
se_copy yes
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 2.02GB
IBM_2145:ITSO_SVC1:admin>rmvdisk volume_A

This command deletes the volume_A volume from the SVC configuration. If the volume is assigned to a host, you need to use the -force flag to delete the volume (Example 9-59).
Example 9-59 rmvdisk (-force)
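A minimal sketch of the forced deletion (the volume name is taken from the preceding paragraph):
IBM_2145:ITSO_SVC1:admin>rmvdisk -force volume_A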
IBM_2145:ITSO_SVC1:admin>expandvdisksize -size 5 -unit gb volume_C

This command expands the volume_C volume, which was 35 GB, by another 5 GB, to give it a total size of 40 GB. To expand a thin-provisioned volume, you can use the -rsize option, as shown in Example 9-61 on page 501. That command changes the real capacity of the volume_B volume to 55 GB. The capacity of the volume itself remains unchanged.
...
copy_id 0
status online
used_capacity 0.41MB
real_capacity 50.02GB
free_capacity 50.02GB
overallocation 199
autoexpand on
warning 80
grainsize 32
se_copy yes

IBM_2145:ITSO_SVC1:admin>expandvdisksize -rsize 5 -unit gb volume_B
IBM_2145:ITSO_SVC1:admin>lsvdisk volume_B
id 26
name volume_B
capacity 100.00GB
type striped
...
copy_id 0
status online
used_capacity 0.41MB
real_capacity 55.02GB
free_capacity 55.02GB
overallocation 181
autoexpand on
warning 80
grainsize 32
se_copy yes
Important: If a volume is expanded, its type becomes striped, even if it was previously sequential or image mode. If there are not enough extents to expand your volume to the specified size, you receive the following error message:
CMMVC5860E Ic_failed_vg_insufficient_virtual_extents
If you do not specify a SCSI LUN ID, the system automatically assigns the next available SCSI LUN ID, given any mappings that already exist with that host. Using the volume and host definitions that we created in the previous sections, we assign volumes to hosts so that they are ready for use. We use the mkvdiskhostmap command (see Example 9-62).
Example 9-62 mkvdiskhostmap
IBM_2145:ITSO_SVC1:admin>mkvdiskhostmap -host Almaden volume_B
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO_SVC1:admin>mkvdiskhostmap -host Almaden volume_C
Virtual Disk to Host map, id [1], successfully created
These commands assign volume_B and volume_C to the host Almaden, as shown in Example 9-63.
Example 9-63 lshostvdiskmap -delim : command
IBM_2145:ITSO_SVC1:admin>lshostvdiskmap -delim :
id:name:SCSI_id:vdisk_id:vdisk_name:vdisk_UID
2:Almaden:0:26:volume_B:6005076801AF813F1000000000000020
2:Almaden:1:27:volume_C:6005076801AF813F1000000000000021
Assigning a specific LUN ID to a volume: The optional -scsi scsi_num parameter can help assign a specific LUN ID to a volume that is to be associated with a given host. The default (if nothing is specified) is to increment based on what is already assigned to the host. Be aware that certain HBA device drivers stop when they find a gap in the SCSI LUN IDs, for example:
- Volume 1 is mapped to Host 1 with SCSI LUN ID 1.
- Volume 2 is mapped to Host 1 with SCSI LUN ID 2.
- Volume 3 is mapped to Host 1 with SCSI LUN ID 4.
When the device driver scans the HBA, it might stop after discovering Volumes 1 and 2, because there is no SCSI LUN mapped with ID 3.

Important: Ensure that the SCSI LUN ID allocation is contiguous.

It is not possible to map a volume to a host more than one time at separate LUNs (Example 9-64).
Example 9-64 mkvdiskhostmap
IBM_2145:ITSO_SVC1:admin>mkvdiskhostmap -host Siam volume_A
Virtual Disk to Host map, id [0], successfully created

This command maps the volume called volume_A to the host called Siam. At this point, you have completed all of the tasks that are required to assign a volume to an attached host.
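Where an explicit SCSI LUN ID is needed, the optional -scsi parameter described earlier can be supplied; a minimal sketch (the ID value and the volume name are assumptions):
IBM_2145:ITSO_SVC1:admin>mkvdiskhostmap -host Siam -scsi 3 volume_B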
IBM_2145:ITSO_SVC1:admin>lshostvdiskmap -delim , Siam
id,name,SCSI_id,vdisk_id,vdisk_name,wwpn,vdisk_UID
3,Siam,0,0,volume_A,210000E08B18FF8A,60050768018301BF280000000000000C

From this output, you can see that the host Siam has only one assigned volume, called volume_A. The SCSI LUN ID is also shown, which is the ID by which the volume is presented to the host. If no host is specified, all defined host-to-volume mappings are returned.

Specifying the flag before the host name: Although the -delim flag normally comes at the end of the command string, in this case, you must specify this flag before the host name. Otherwise, it returns the following message:
CMMVC6070E An invalid or duplicated parameter, unaccompanied argument, or incorrect argument sequence has been detected. Ensure that the input is as per the help.
IBM_2145:ITSO_SVC1:admin>rmvdiskhostmap -host Tiger volume_D

This command unmaps the volume called volume_D from the host called Tiger.
After you know these details, you can issue the migratevdisk command, as shown in Example 9-67.
Example 9-67 migratevdisk
IBM_2145:ITSO_SVC1:admin>migratevdisk -mdiskgrp STGPool_DS5000-1 -vdisk volume_C

This command moves volume_C to the storage pool named STGPool_DS5000-1.

Tips: If insufficient extents are available within your target storage pool, you receive an error message. Make sure that the source and target storage pools have the same extent size. The optional -threads parameter allows you to assign a priority to the migration process. The default is 4, which is the highest priority setting. However, if you want the process to take a lower priority than other types of I/O, you can specify 3, 2, or 1.

You can run the lsmigrate command at any time to see the status of the migration process (Example 9-68).
Example 9-68 lsmigrate command
IBM_2145:ITSO_SVC1:admin>lsmigrate
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 27
migrate_target_mdisk_grp 2
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO_SVC1:admin>lsmigrate
migrate_type MDisk_Group_Migration
progress 76
migrate_source_vdisk_index 27
migrate_target_mdisk_grp 2
max_thread_count 4
migrate_source_vdisk_copy_id 0
Progress: The progress is given as percent complete. If you receive no more replies, it means that the process has finished.
Both of the MDisks involved are reported as being image mode during the migration. If the migration is interrupted by a system recovery or by a cache problem, the migration resumes after the recovery completes. Example 9-69 shows an example of the command.
Example 9-69 migratetoimage IBM_2145:ITSO_SVC1:admin>migratetoimage -vdisk volume_A -mdisk mdisk10 -mdiskgrp STGPool_IMAGE
In this example, you migrate the data from volume_A onto mdisk10, and the MDisk is placed into the STGPool_IMAGE storage pool.
Assuming your operating system supports it, you can use the shrinkvdisksize command to decrease the capacity of a given volume. Example 9-70 shows an example of this command.
Example 9-70 shrinkvdisksize IBM_2145:ITSO_SVC1:admin>shrinkvdisksize -size 44 -unit gb volume_D
This command shrinks a volume called volume_D from a total size of 80 GB, by 44 GB, to a new total size of 36 GB.
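The listing that the next paragraph describes is produced by the lsmdiskmember command; a minimal sketch (the MDisk name is taken from the text):
IBM_2145:ITSO_SVC1:admin>lsmdiskmember mdisk8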
This command displays a list of all of the volume IDs that correspond to the volume copies that use mdisk8. To correlate the IDs displayed in this output to volume names, we can run the lsvdisk command, which we discuss in more detail in 9.5, Working with volumes on page 487.
IBM_2145:ITSO_SVC1:admin>lsvdiskmember 0
id
4
5
6
7

If you want to know more about these MDisks, you can run the lsmdisk command, as explained in 9.2, Working with managed disks and disk controller systems on page 470 (using the ID displayed in Example 9-73 rather than the name).
9.5.20 Showing from which storage pool a volume has its extents
Use the lsvdisk command as shown in Example 9-74 to show to which storage pool a specific volume belongs.
Example 9-74 lsvdisk command: storage pool name
IBM_2145:ITSO_SVC1:admin>lsvdisk Volume_D
id 25
name Volume_D
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
capacity 10.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AF813F100000000000001E
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 1
filesystem
mirror_write_priority latency
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 2.02GB
free_capacity 2.02GB
overallocation 496
autoexpand on
warning 80
grainsize 32
se_copy yes
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 2.02GB
To learn more about these storage pools, you can run the lsmdiskgrp command, as explained in 9.2.10, Working with a storage pool on page 476.
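A minimal sketch of the query that the next paragraph describes (the volume name is taken from the text; note the -delim flag placement discussed below):
IBM_2145:ITSO_SVC1:admin>lsvdiskhostmap -delim , volume_B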
This command shows the host or hosts to which the volume_B volume is mapped. It is normal to see duplicate entries, because there are multiple paths between the clustered system and the host. To be sure that the operating system on the host sees the disk only one time, you must install and configure a multipath software application, such as the IBM Subsystem Device Driver (SDD).

Specifying the -delim flag: Although the optional -delim flag normally comes at the end of the command string, in this case, you must specify this flag before the volume name. Otherwise, the command does not return any data.
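A minimal sketch of the query described next (the host name is taken from the text):
IBM_2145:ITSO_SVC1:admin>lshostvdiskmap -delim , Almaden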
This command shows which volumes are mapped to the host called Almaden.

Specifying the -delim flag: Although the optional -delim flag normally comes at the end of the command string, in this case, you must specify this flag before the host name. Otherwise, the command does not return any data.
DEV#: 0  DEVICE NAME: Disk1 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 60050768018301BF2800000000000005
============================================================================
Path#          Adapter/Hard Disk     State  Mode    Select  Errors
0    Scsi Port2 Bus0/Disk1 Part0     OPEN   NORMAL  20      0
1    Scsi Port3 Bus0/Disk1 Part0     OPEN   NORMAL  2343    0

DEV#: 1  DEVICE NAME: Disk2 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 60050768018301BF2800000000000004
============================================================================
Path#          Adapter/Hard Disk     State  Mode    Select  Errors
0    Scsi Port2 Bus0/Disk2 Part0     OPEN   NORMAL  2335    0
1    Scsi Port3 Bus0/Disk2 Part0     OPEN   NORMAL  0       0

DEV#: 2  DEVICE NAME: Disk3 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 60050768018301BF2800000000000006
============================================================================
Path#          Adapter/Hard Disk     State  Mode    Select  Errors
0    Scsi Port2 Bus0/Disk3 Part0     OPEN   NORMAL  2331    0
1    Scsi Port3 Bus0/Disk3 Part0     OPEN   NORMAL  0       0

State: In Example 9-77, the state of each path is OPEN. Sometimes you will see the state CLOSED. This does not necessarily indicate a problem, because it might be a result of the path's processing stage.

2. Run the lshostvdiskmap command to return a list of all assigned volumes (Example 9-78).
Example 9-78 lshostvdiskmap IBM_2145:ITSO_SVC1:admin>lshostvdiskmap -delim , Almaden id,name,SCSI_id,vdisk_id,vdisk_name,vdisk_UID 2,Almaden,0,26,volume_B,60050768018301BF2800000000000005 2,Almaden,1,27,volume_A,60050768018301BF2800000000000004
2,Almaden,2,28,volume_C,60050768018301BF2800000000000006
Look for the disk serial number that matches your datapath query device output. This host was defined in our SVC as Almaden. 3. Run the lsvdiskmember vdiskname command for a list of the MDisk or MDisks that make up the specified volume (Example 9-79).
Example 9-79 lsvdiskmember
IBM_2145:ITSO_SVC1:admin>lsvdiskmember volume_E
id
0
1
2
3
4
10
11
13
15
16
17
4. Query the MDisks with the lsmdisk mdiskID command to find their controller and LUN number information, as shown in Example 9-80. The output displays the controller name and the controller LUN ID to help you (provided that you gave your controller a unique name, such as a serial number) track the volume back to a LUN within the disk subsystem.
Example 9-80 lsmdisk command
IBM_2145:ITSO_SVC1:admin>lsmdisk 0
id 0
name mdisk0
status online
mode managed
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
capacity 128.0GB
quorum_index 1
block_size 512
controller_name ITSO-DS3500
ctrl_type 4
ctrl_WWNN 20080080E51B09E8
controller_id 2
path_count 4
max_path_count 4
ctrl_LUN_# 0000000000000000
UID 60080e50001b0b62000007b04e731e4d00000000000000000000000000000000
preferred_WWPN 20580080E51B09E8
active_WWPN 20580080E51B09E8
fast_write_state empty
raid_status
raid_level
redundancy
strip_size
spare_goal
spare_protection_min
balanced
tier generic_hdd
The private key for authentication (for example, icat.ppk). This key is the private key that you have already created. This parameter is set under the Connection > SSH > Auth category, as shown in Figure 9-4.

The IP address of the SVC clustered system. This parameter is set under the Session category, as shown in Figure 9-5 on page 514.
A session name. Our example uses ITSO_SVC1. Our PuTTY version is 0.60. To use this predefined PuTTY session, use the following syntax:
plink ITSO_SVC1
If a predefined PuTTY session is not used, use this syntax:
plink admin@<your cluster ip add> -i "C:\DirectoryPath\KeyName.PPK"

IBM provides a suite of scripting tools that is based on Perl. You can download these scripting tools from this website:
http://www.alphaworks.ibm.com/tech/svctools
Help: You can also use -h instead of -?, for example, svcinfo -h or svctask commandname -h. If you look at the syntax of a command by typing svcinfo commandname -?, you often see -filter listed as a parameter. Be aware that the correct parameter is -filtervalue.

Tip: You can use the up and down arrow keys on your keyboard to recall commands that were recently issued. Then, you can use the left and right arrow, Backspace, and Delete keys to edit commands before you resubmit them.
Filtering
To reduce the output that is displayed by a command, you can specify a number of filters, depending on which command you are running. To see which filters are available, type the command followed by the -filtervalue? flag, as shown in Example 9-81.
Example 9-81 lsvdisk -filtervalue? command IBM_2145:ITSO_SVC1:admin>lsvdisk -filtervalue?
Filters for this view are:
name
id
IO_group_id
IO_group_name
status
mdisk_grp_name
mdisk_grp_id
capacity
type
FC_id
FC_name
RC_id
RC_name
vdisk_name
vdisk_id
vdisk_UID
fc_map_count
copy_count
fast_write_state
se_copy_count
filesystem
preferred_node_id
mirror_write_priority
RC_flash
When you know the filters, you can be more selective in generating output:
- Multiple filters can be combined to create specific searches.
- You can use an asterisk (*) as a wildcard when using names.
- When capacity is used, the units must also be specified by using -u b | kb | mb | gb | tb | pb.

For example, if we issue the lsvdisk command with no filters but with the -delim parameter, we see the output that is shown in Example 9-82.
Example 9-82 lsvdisk command: No filters
IBM_2145:ITSO_SVC1:admin>lsvdisk -delim ,
id,name,IO_group_id,IO_group_name,status,mdisk_grp_id,mdisk_grp_name,capacity,type,FC_id,FC_name,RC_id,RC_name,vdisk_UID,fc_map_count,copy_count,fast_write_state,se_copy_count,RC_change
0,ESXI_SRV1_VOL01,1,io_grp1,online,many,many,100.00GB,many,,,,,6005076801AF813F1000000000000014,0,2,empty,0,no
1,volume_7,0,io_grp0,online,0,STGPool_DS3500-1,10.00GB,striped,,,,,6005076801AF813F100000000000001F,0,1,empty,1,no
2,W2K3_SRV1_VOL02,1,io_grp1,online,0,STGPool_DS3500-1,10.00GB,striped,,,,,6005076801AF813F1000000000000003,0,1,empty,0,no
3,W2K3_SRV1_VOL03,1,io_grp1,online,0,STGPool_DS3500-1,10.00GB,striped,,,,,6005076801AF813F1000000000000004,0,1,empty,0,no
4,W2K3_SRV1_VOL04,1,io_grp1,online,0,STGPool_DS3500-1,10.00GB,striped,,,,,6005076801AF813F1000000000000005,0,1,empty,0,no
5,W2K3_SRV1_VOL05,1,io_grp1,online,0,STGPool_DS3500-1,10.00GB,striped,,,,,6005076801AF813F1000000000000006,0,1,empty,0,no
6,W2K3_SRV1_VOL06,1,io_grp1,online,0,STGPool_DS3500-1,10.00GB,striped,,,,,6005076801AF813F1000000000000007,0,1,empty,0,no
7,W2K3_SRV2_VOL01,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F1000000000000008,0,1,empty,0,no
8,W2K3_SRV2_VOL02,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F1000000000000009,0,1,empty,0,no
Tip: The -delim parameter separates the data fields with the specified delimiter character, as opposed to wrapping text over multiple lines, and truncates the content in the window. This parameter is normally used in cases where you need to generate reports during script execution.

If we now add a filter (mdisk_grp_name) to our lsvdisk command, we can reduce the output, as shown in Example 9-83.
Example 9-83 lsvdisk command: With a filter
IBM_2145:ITSO_SVC1:admin>lsvdisk -filtervalue mdisk_grp_name=STGPool_DS3500-2
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state se_copy_count RC_change
7,W2K3_SRV2_VOL01,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F1000000000000008,0,1,empty,0,no
8,W2K3_SRV2_VOL02,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F1000000000000009,0,1,empty,0,no
iscsi_auth_method none
iscsi_chap_secret
auth_service_configured no
auth_service_enabled no
auth_service_url
auth_service_user_name
auth_service_pwd_set no
auth_service_cert_set no
auth_service_type tip
relationship_bandwidth_limit 25
tier generic_ssd
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier generic_hdd
tier_capacity 786.50GB
tier_free_capacity 352.25GB
has_nas_key no
layer appliance
Use the lssystemstats command to display the most recent values of all node statistics across all nodes in a clustered system, as shown in Example 9-85.
Example 9-85 lssystemstats command
IBM_2145:ITSO_SVC1:admin>lssystemstats
stat_name      stat_current stat_peak stat_peak_time
cpu_pc         1            1         110927162859
fc_mb          0            0         110927162859
fc_io          7091         7314      110927162524
sas_mb         0            0         110927162859
sas_io         0            0         110927162859
iscsi_mb       0            0         110927162859
iscsi_io       0            0         110927162859
write_cache_pc 0            0         110927162859
total_cache_pc 0            0         110927162859
vdisk_mb       0            0         110927162859
vdisk_io       0            0         110927162859
vdisk_ms       0            0         110927162859
mdisk_mb       0            0         110927162859
mdisk_io       0            0         110927162859
mdisk_ms       0            0         110927162859
drive_mb       0            0         110927162859
drive_io       0            0         110927162859
drive_ms       0            0         110927162859
vdisk_r_mb     0            0         110927162859
vdisk_r_io     0            0         110927162859
vdisk_r_ms     0            0         110927162859
vdisk_w_mb     0            0         110927162859
vdisk_w_io     0            0         110927162859
vdisk_w_ms     0            0         110927162859
mdisk_r_mb     0            0         110927162859
mdisk_r_io     0            0         110927162859
mdisk_r_ms     0            0         110927162859
mdisk_w_mb     0            0         110927162859
mdisk_w_io     0            0         110927162859
mdisk_w_ms     0            0         110927162859
drive_r_mb     0            0         110927162859
drive_r_io     0            0         110927162859
drive_r_ms     0            0         110927162859
drive_w_mb     0            0         110927162859
drive_w_io     0            0         110927162859
drive_w_ms     0            0         110927162859
There are two ways to configure iSCSI authentication (CHAP): for the whole clustered system or per host connection. Example 9-87 shows configuring CHAP for the whole clustered system.
Example 9-87 Setting a CHAP secret for the entire clustered system to passw0rd IBM_2145:ITSO_SVC1:admin>chsystem -iscsiauthmethod chap -chapsecret passw0rd
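To set a CHAP secret for a single host connection instead, the chhost command can be used; a minimal sketch (the host name is an assumption from earlier examples):
IBM_2145:ITSO_SVC1:admin>chhost -chapsecret passw0rd Baldur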
In our scenario, we have a clustered system IP address of 9.64.210.64, which is not affected while we configure the node IP addresses. We start by listing our ports by using the lsportip command (not shown). We see that we have two ports per node with which to work. Both ports can have two IP addresses that can be used for iSCSI. We configure the secondary port on both nodes in our I/O Group, as shown in Example 9-88.
Example 9-88 Configuring secondary Ethernet port on SVC nodes IBM_2145:ITSO_SVC1:admin>cfgportip -node 1 -ip 9.8.7.1 -gw 9.0.0.1 -mask 255.255.255.0 2 IBM_2145:ITSO_SVC1:admin>cfgportip -node 2 -ip 9.8.7.3 -gw 9.0.0.1 -mask 255.255.255.0 2
While both nodes are online, each node is available to iSCSI hosts on the IP address that we configured. Note that iSCSI failover between nodes is enabled automatically. Therefore, if a node goes offline for any reason, its partner node in the I/O Group becomes available on the failed node's port IP address. This behavior ensures that hosts can continue to perform I/O. The lsportip command displays which port IP addresses are currently active on each node.
000002006AC03A42 ITSO_SVC2 remote 1 10.18.228.82 255.255.255.0 10.18.228.1
000002006AC03A42 ITSO_SVC2 remote 2
0000020060A06FB8 ITSO_SVC3 remote 1 10.18.228.83 255.255.255.0 10.18.228.83 fdee:beeb:beeb:0000:0000:0000:0000:0083 48 fdee:beeb:beeb:0000:0000:0000:0000:0083
0000020060A06FB8 ITSO_SVC3 remote 2
Modify the IP address by issuing the chsystemip command. You can either specify a static IP address or have the system assign a dynamic IP address, as shown in Example 9-90.
Example 9-90 chsystemip -systemip
IBM_2145:ITSO_SVC1:admin>chsystemip -systemip 10.20.133.5 -gw 10.20.135.1 -mask 255.255.255.0 -port 1

This command changes the current IP address of the clustered system to 10.20.133.5.

Important: If you specify a new system IP address, the existing communication with the system through the CLI is broken, and the PuTTY application automatically closes. You must relaunch the PuTTY application and point it to the new IP address, but your SSH key still works.

List the IP service addresses of the clustered system by issuing the lsserviceip command, as shown in Example 9-89 on page 521.
At this point, we have completed the tasks that are required to change the IP addresses of the clustered system.
Tip: If you have changed the time zone, you must clear the event log dump directory before you can view the event log through the web application.
2. To find the time zone code that is associated with your time zone, enter the lstimezones command, as shown in Example 9-92 (a truncated list is provided for this example). If the current setting is correct (for example, 522 UTC), go to step 4. If not, continue with step 3.
Example 9-92 lstimezones command
IBM_2145:ITSO_SVC1:admin>lstimezones
id  timezone
...
507 Turkey
508 UCT
509 Universal
510 US/Alaska
511 US/Aleutian
512 US/Arizona
513 US/Central
514 US/Eastern
515 US/East-Indiana
516 US/Hawaii
517 US/Indiana-Starke
518 US/Michigan
519 US/Mountain
520 US/Pacific
521 US/Samoa
522 UTC
...
3. Now that you know which time zone code is correct for you, set the time zone by issuing the settimezone command (Example 9-93).
Example 9-93 settimezone command IBM_2145:ITSO_SVC1:admin>settimezone -timezone 520
4. Set the system time by issuing the setclustertime command (Example 9-94).
Example 9-94 setclustertime command IBM_2145:ITSO_SVC1:admin>setclustertime -time 061718402008
The format of the time is MMDDHHmmYYYY. You have completed the necessary tasks to set the clustered system time zone and time.
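Statistics collection is started with the startstats command; a minimal sketch matching the 15-minute interval described next:
IBM_2145:ITSO_SVC1:admin>startstats -interval 15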
The interval that we specify (minimum 1, maximum 60) is in minutes. This command starts statistics collection and gathers data at 15-minute intervals. Statistics collection: To verify that statistics collection is set, display the system properties again, as shown in Example 9-96.
Example 9-96 Statistics collection status and frequency
IBM_2145:ITSO_SVC1:admin>lssystem
statistics_status on
statistics_frequency 15
-- Note that the output has been shortened for easier reading. --

Note: Starting with SVC 6.3, the svctask stopstats command has been removed; you cannot disable statistics collection.

At this point, we have completed the required tasks to start statistics collection on the clustered system.
nodes to be destaged in the event of a subsequent unexpected power loss. Recharging the uninterruptible power supply can take as long as two hours. Shutting down the clustered system prior to removing input power to the uninterruptible power supply units prevents the battery power from being drained. It also makes it possible for I/O activity to be resumed as soon as input power is restored. You can use the following procedure to shut down the system: 1. Use the stopsystem command to shut down your SVC system (Example 9-98).
Example 9-98 stopsystem command IBM_2145:ITSO_SVC1:admin>stopsystem Are you sure that you want to continue with the shut down?
This command shuts down the SVC clustered system. All data is flushed to disk before the power is removed. At this point, you lose administrative contact with your system, and the PuTTY application automatically closes. 2. You will be presented with the following message: Warning: Are you sure that you want to continue with the shut down? Ensure that you have stopped all FlashCopy mappings, Metro Mirror (Remote Copy) relationships, data migration operations, and forced deletions before continuing. Entering y to this message will execute the command. No feedback is then displayed. Entering anything other than y(es) or Y(ES) will result in the command not executing. No feedback is displayed. Important: Before shutting down a clustered system, ensure that all I/O operations are stopped that are destined for this system, because you will lose all access to all volumes being provided by this system. Failure to do so can result in failed I/O operations being reported to the host operating systems. Begin the process of quiescing all I/O to the system by stopping the applications on the hosts that are using the volumes provided by the clustered system.
3. We have completed the tasks that are required to shut down the system. To shut down the uninterruptible power supply units, press the power on button on the front panel of each uninterruptible power supply unit. Restarting the system: To restart the clustered system, you must first restart the uninterruptible power supply units by pressing the power button on their front panels. Then press the power on button on the service panel of one of the nodes within the system. After the node is fully booted up (for example, displaying Cluster: on line 1 and the cluster name on line 2 of the panel), you can start the other nodes in the same way. As soon as all of the nodes are fully booted, you can reestablish administrative contact using PuTTY, and your system will be fully operational again.
9.9 Nodes
This section details the tasks that can be performed at an individual node level.
failover_iscsi_alias
panel_name 108283
enclosure_id
canister_id
enclosure_serial_number
service_IP_address 10.18.228.101
service_gateway 10.18.228.1
service_subnet_mask 255.255.255.0
service_IP_address_6
service_gateway_6
service_prefix_6
Tip: The node that you want to add must have a separate uninterruptible power supply unit serial number from the uninterruptible power supply unit on the first node.
Example 9-101 lsnode command
IBM_2145:ITSO_SVC1:admin>lsnode -delim ,
id,name,UPS_serial_number,WWNN,status,IO_group_id,IO_group_name,config_node,UPS_unique_id,hardware,iscsi_name,iscsi_alias,panel_name,enclosure_id,canister_id,enclosure_serial_number
4,SVC1N3,1000739007,50050768010037E5,online,1,io_grp1,no,10000000000037E5,8G4,iqn.1986-03.com.ibm:2145.itsosvc1.svc1n3,,104643,,,
Now that we know the available nodes, we can use the addnode command to add the node to the SVC clustered system configuration. Example 9-102 shows the command to add a node to the SVC system.
Example 9-102 addnode (wwnodename) command IBM_2145:ITSO_SVC1:admin>addnode -wwnodename 50050768010037E5 -iogrp io_grp1
This command adds the candidate node with the WWNN 50050768010037E5 to the I/O Group called io_grp1. We used the -wwnodename parameter (50050768010037E5). However, we can also use the -panelname parameter (104643) instead, as shown in Example 9-103. When standing in front of the node, it is easier to read the panel name than it is to get the WWNN.
Example 9-103 addnode (panelname) command IBM_2145:ITSO_SVC1:admin>addnode -panelname 104643 -name SVC1N3 -iogrp io_grp1
We also used the optional -name parameter (SVC1N3). If you do not provide the -name parameter, the SVC automatically generates the name nodex (where x is the ID sequence number that is assigned internally by the SVC).

Name: If you want to provide a name, you can use the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the underscore (_). The name can be between one and 63 characters in length. However, the name cannot start with a number, the dash, or the word node (because this prefix is reserved for SVC assignment only).

If the addnode command returns no information, but your second node is powered on and the zones are correctly defined, preexisting system configuration data might be stored in the node. If you are sure that this node is not part of another active SVC system, use the service panel to delete the existing system information. After this action is complete, reissue the lsnodecandidate command, and you will see the node listed.
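A minimal sketch of the rename that the next paragraph describes (the chnode command specifies the new name first, followed by the node ID):
IBM_2145:ITSO_SVC1:admin>chnode -name ITSO_SVC1_SVC1N3 4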
This command renames node ID 4 to ITSO_SVC1_SVC1N3. Name: The chnode command specifies the new name first. You can use letters A to Z and a to z, numbers 0 to 9, the dash (-), and the underscore (_). The name can be between one and 63 characters in length. However, the name cannot start with a number, dash, or the word node (because this prefix is reserved for SVC assignment only).
Because SVC1N2 was also the configuration node, the SVC transfers the configuration node responsibilities to a surviving node within the I/O Group. Unfortunately, the PuTTY session cannot be dynamically passed to the surviving node, so the PuTTY application loses communication and closes automatically. We must restart the PuTTY application to establish a secure session with the new configuration node. Important: If this node is the last node in an I/O Group and there are volumes still assigned to the I/O Group, the node is not deleted from the clustered system. If this node is the last node in the system and the I/O Group has no volumes remaining, the clustered system is destroyed and all virtualization information is lost. Any data that is still required must be backed up or migrated before destroying the system.
IBM_2145:ITSO_SVC1:admin>stopcluster -node SVC1N3
Are you sure that you want to continue with the shut down?
This command shuts down node SVC1N3 in a graceful manner. When this node has been shut down, the other node in the I/O Group destages the contents of its cache and goes into write-through mode until the node is powered up and rejoins the clustered system. Important: There is no need to stop FlashCopy mappings, Remote Copy relationships, and data migration operations; the other node handles these activities. However, be aware that the system now has a single point of failure. If this is the last node in an I/O Group, all access to the volumes in the I/O Group is lost, so verify that this is really what you want to do before executing the command; in that case, you must specify the -force flag. By reissuing the lsnode command (Example 9-107), we can see that the node is now offline.
Example 9-107 lsnode command
IBM_2145:ITSO_SVC1:admin>lsnode -delim ,
id,name,UPS_serial_number,WWNN,status,IO_group_id,IO_group_name,config_node,UPS_unique_id,hardware,iscsi_name,iscsi_alias,panel_name,enclosure_id,canister_id,enclosure_serial_number
1,SVC1N1,1000739004,50050768010027E2,online,0,io_grp0,no,10000000000027E2,8G4,iqn.1986-03.com.ibm:2145.itsosvc1.svc1n1,,108283,,,
2,SVC1N2,1000739005,5005076801005034,online,0,io_grp0,yes,1000000000005034,8G4,iqn.1986-03.com.ibm:2145.itsosvc1.svc1n2,,110711,,,
3,SVC1N4,1000739006,500507680100505C,online,1,io_grp1,no,20400001C3240004,8G4,iqn.1986-03.com.ibm:2145.itsosvc1.svc1n4,,110775,,,
4,SVC1N3,1000739007,50050768010037E5,offline,1,io_grp1,no,10000000000037E5,8G4,iqn.1986-03.com.ibm:2145.itsosvc1.svc1n3,,104643,,,
IBM_2145:ITSO_SVC1:admin>lsnode SVC1N3
CMMVC5782E The object specified is offline.
Restart: To restart the node manually, press the power on button on the service panel of the node.
At this point, we have completed the tasks that are required to view, add, delete, rename, and shut down a node within an SVC environment.
IBM_2145:ITSO_SVC1:admin>lsiogrp
id name node_count vdisk_count host_count
0 io_grp0 2 24 9
1 io_grp1 2 22 9
2 io_grp2 0 0 1
3 io_grp3 0 0 1
4 recovery_io_grp 0 0 0
As shown, the SVC predefines five I/O Groups. In a four-node clustered system (such as the one in our example), only two I/O Groups are actually in use. The other I/O Groups (io_grp2 and io_grp3) are for a six- or eight-node clustered system. The recovery I/O Group is a temporary home for volumes when all nodes in the I/O Group that normally owns them have suffered multiple failures. This design allows us to move the volumes to the recovery I/O Group and then into a working I/O Group. Note that while a volume is temporarily assigned to the recovery I/O Group, I/O access is not possible.
IBM_2145:ITSO_SVC1:admin>chiogrp -name io_grpA io_grp1 This command renames the I/O Group io_grp1 to io_grpA. Name: The chiogrp command specifies the new name first. If you want to provide a name, you can use letters A to Z, letters a to z, numbers 0 to 9, the dash (-), and the underscore (_). The name can be between one and 63 characters in length. However, the name cannot start with a number, dash, or the word iogrp (because this prefix is reserved for SVC assignment only). To see whether the renaming was successful, issue the lsiogrp command again to see the change. At this point we have completed the tasks that are required to rename an I/O Group.
IBM_2145:ITSO_SVC1:admin>addhostiogrp -iogrp 1 Kanaga
This command takes the following parameters:
-iogrp iogrp_list: Specifies a list of one or more I/O Groups that must be mapped to the host. This parameter is mutually exclusive with the -iogrpall option.
-iogrpall: Specifies that all of the I/O Groups must be mapped to the specified host. This parameter is mutually exclusive with -iogrp.
-host host_id_or_name: Identifies the host, either by ID or name, to which the I/O Groups must be mapped.
Use the rmhostiogrp command to unmap a specific host from a specific I/O Group, as shown in Example 9-111.
Example 9-111 rmhostiogrp command
IBM_2145:ITSO_SVC1:admin>rmhostiogrp -iogrp 0 Kanaga
This command takes the following parameters:
-iogrp iogrp_list: Specifies a list of one or more I/O Groups that must be unmapped from the host. This parameter is mutually exclusive with the -iogrpall option.
-iogrpall: Specifies that all of the I/O Groups must be unmapped from the specified host. This parameter is mutually exclusive with -iogrp.
-force: If the removal of a host-to-I/O-Group mapping results in the loss of volume-to-host mappings, the command fails unless the -force flag is used. The -force flag overrides this behavior and forces the deletion of the host-to-I/O-Group mapping.
host_id_or_name: Identifies the host, either by ID or name, from which the I/O Groups must be unmapped.
IBM_2145:ITSO_SVC1:admin>lshostiogrp Kanaga
id name
1 io_grp1
To list all of the host objects that are mapped to the specified I/O Group, use the lsiogrphost command, as shown in Example 9-113 on page 533.
IBM_2145:ITSO_SVC1:admin>lsiogrphost io_grp1
id name
1 Nile
2 Kanaga
3 Siam
In Example 9-113, io_grp1 is the I/O Group name.
Example 9-115 is a simple example of creating a user. User John is added to the user group Monitor with the password m0nitor.
Example 9-115 mkuser called John with password m0nitor IBM_2145:ITSO_SVC1:admin>mkuser -name John -usergrp Monitor -password m0nitor User, id [6], successfully created
Local users are users that are not authenticated by a remote authentication server. Remote users are users that are authenticated by a remote central registry server. The user groups already have a defined authority role, as listed in Table 9-2.
Table 9-2 Authority roles

Security admin
Role: All commands
Users: Superusers

Administrator
Role: All commands except: chauthservice, mkuser, rmuser, chuser, mkusergrp, rmusergrp, chusergrp, and setpwdreset
Users: Administrators that control the SVC

Copy operator
Role: All display commands and the following commands: prestartfcconsistgrp, startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap, startfcmap, stopfcmap, chfcmap, startrcconsistgrp, stoprcconsistgrp, switchrcconsistgrp, chrcconsistgrp, startrcrelationship, stoprcrelationship, switchrcrelationship, chrcrelationship, and chpartnership
Users: Users that control all of the copy functionality of the cluster

Service
Role: All display commands and the following commands: applysoftware, setlocale, addnode, rmnode, cherrstate, writesernum, detectmdisk, includemdisk, clearerrlog, cleardumps, settimezone, stopcluster, startstats, stopstats, and settime
Users: Users that perform service maintenance and other hardware tasks on the system

Monitor
Role: All display commands and the following commands: finderr, dumperrlog, dumpinternallog, and chcurrentuser, plus svcconfig backup
Users: Users that need view access only
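If a user later needs a different role, the account can be moved to another user group with the chuser command. A minimal sketch, assuming the user John created earlier and the Service group from Table 9-2:
IBM_2145:ITSO_SVC1:admin>chuser -usergrp Service John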
To view our currently defined users and the user groups to which they belong, we use the lsuser command, as shown in Example 9-117.
Example 9-117 lsuser command
IBM_2145:ITSO_SVC1:admin>lsuser -delim ,
id,name,password,ssh_key,remote,usergrp_id,usergrp_name
0,superuser,yes,no,no,0,SecurityAdmin
1,admin,yes,yes,no,0,SecurityAdmin
2,Torben,yes,no,no,0,SecurityAdmin
3,Massimo,yes,no,no,1,Administrator
4,Christian,yes,no,no,1,Administrator
5,Alejandro,yes,no,no,1,Administrator
6,John,yes,no,no,4,Monitor
Example 9-118 catauditlog command IBM_2145:ITSO_SVC1:admin>catauditlog -first 5 audit_seq_no timestamp cluster_user ssh_ip_address result res_obj_id action_cmd 459 110928150506 admin 10.18.228.173 0 6 svctask mkuser -name John -usergrp Monitor -password '######' 460 110928160353 admin 10.18.228.173 0 7 svctask mkmdiskgrp -name DS5000-2 -ext 256 461 110928160535 admin 10.18.228.173 0 1 svctask mkhost -name hostone -hbawwpn 210100E08B251DD4 -force -mask 1001 462 110928160755 admin 10.18.228.173 0 1 svctask mkvdisk -iogrp 0 -mdiskgrp 3 -size 10 -unit gb -vtype striped -autoexpand -grainsize 32 -rsize 20% 463 110928160817 admin 10.18.228.173 0 svctask rmvdisk 1
If you need to dump the contents of the in-memory audit log to a file on the current configuration node, use the dumpauditlog command. This command does not provide any feedback; it only provides the prompt. To obtain a list of the audit log dumps, use the lsdumps command as shown in Example 9-119.
Example 9-119 lsdumps command
IBM_2145:ITSO_SVC1:admin>lsdumps
id filename
0 dump.110711.110914.182844
1 svc.config.cron.bak_108283
2 sel.110711.trc
3 endd.trc
4 rtc.race_mq_log.txt.110711.trc
5 dump.110711.110920.102530
6 ethernet.110711.trc
7 svc.config.cron.bak_110711
8 svc.config.cron.xml_110711
9 svc.config.cron.log_110711
10 svc.config.cron.sh_110711
11 110711.trc
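A minimal sketch of producing such a dump, as described above; the command returns silently to the prompt:
IBM_2145:ITSO_SVC1:admin>dumpauditlog
IBM_2145:ITSO_SVC1:admin>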
Scenario description
We use the following scenario in both the command-line section and the GUI section. In the scenario, we want to FlashCopy the following volumes:
DB_Source: database files
Log_Source: database log files
App_Source: application files
We create Consistency Groups to handle the FlashCopy of DB_Source and Log_Source, because data integrity must be maintained across DB_Source and Log_Source. In our scenario, the application files are independent of the database, so we create a single FlashCopy mapping for App_Source. We make two FlashCopy targets each for DB_Source and Log_Source and, therefore, two Consistency Groups. Figure 9-6 shows the scenario.
IBM_2145:ITSO_SVC3:admin>mkfcconsistgrp -name FCCG1
FlashCopy Consistency Group, id [1], successfully created
IBM_2145:ITSO_SVC3:admin>mkfcconsistgrp -name FCCG2
FlashCopy Consistency Group, id [2], successfully created
In Example 9-121, we check the status of the Consistency Groups; each one has a status of empty.
Example 9-121 Checking the status
IBM_2145:ITSO_SVC3:admin>lsfcconsistgrp
id name status
1 FCCG1 empty
2 FCCG2 empty
If you want to change the name of a Consistency Group, you can use the chfcconsistgrp command. Type chfcconsistgrp -h for help with this command.
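A minimal sketch of such a rename, assuming a hypothetical new name of FCCG1_NEW; the command takes the new name first, then the current group:
IBM_2145:ITSO_SVC3:admin>chfcconsistgrp -name FCCG1_NEW FCCG1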
IBM_2145:ITSO_SVC3:admin>mkfcmap -source DB_Source -target DB_Target1 -name DB_Map1 -consistgrp FCCG1 FlashCopy Mapping, id [0], successfully created IBM_2145:ITSO_SVC3:admin>mkfcmap -source Log_Source -target Log_Target1 -name Log_Map1 -consistgrp FCCG1 FlashCopy Mapping, id [1], successfully created IBM_2145:ITSO_SVC3:admin>mkfcmap -source App_Source -target App_Target1 -name App_Map1 FlashCopy Mapping, id [2], successfully created Example 9-123 on page 541 shows the command to create a second FlashCopy mapping for volume DB_Source and Log_Source.
IBM_2145:ITSO_SVC3:admin>mkfcmap -source DB_Source -target DB_Target2 -name DB_Map2 -consistgrp FCCG2 FlashCopy Mapping, id [3], successfully created IBM_2145:ITSO_SVC3:admin>mkfcmap -source Log_Source -target Log_Target2 -name Log_Map2 -consistgrp FCCG2 FlashCopy Mapping, id [4], successfully created Example 9-124 shows the result of these FlashCopy mappings. The status of the mapping is idle_or_copied.
Example 9-124 Check the result of Multiple Target FlashCopy mappings
IBM_2145:ITSO_SVC3:admin>lsfcmap
id name source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id group_name status progress copy_rate clean_progress incremental partner_FC_id partner_FC_name restoring start_time rc_controlled
0 DB_Map1 3 DB_Source 4 DB_Target1 1 FCCG1 idle_or_copied 0 50 100 off no no
1 Log_Map1 6 Log_Source 7 Log_Target1 1 FCCG1 idle_or_copied 0 50 100 off no no
2 App_Map1 9 App_Source 10 App_Target1 idle_or_copied 0 50 100 off no no
3 DB_Map2 3 DB_Source 5 DB_Target2 2 FCCG2 idle_or_copied 0 50 100 off no no
4 Log_Map2 6 Log_Source 8 Log_Target2 2 FCCG2 idle_or_copied 0 50 100 off no no
IBM_2145:ITSO_SVC3:admin>lsfcconsistgrp
id name status
1 FCCG1 idle_or_copied
2 FCCG2 idle_or_copied
If you want to change the FlashCopy mapping, you can use the chfcmap command. Type chfcmap -h to get help with this command.
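For example, to raise the background copy rate of a mapping, a minimal sketch (the value 80 is an arbitrary illustration):
IBM_2145:ITSO_SVC3:admin>chfcmap -copyrate 80 DB_Map1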
A FlashCopy mapping must be prepared before it can be triggered; here we prepare the mappings that do not belong to any Consistency Group. In our scenario, App_Map1 is not in a Consistency Group. Example 9-125 shows how to initialize the preparation for App_Map1. Alternatively, you can add the -prep parameter to the startfcmap command, which first prepares the mapping and then starts the FlashCopy. In the example, we also show how to check the status of the current FlashCopy mapping: App_Map1's status is prepared.
Example 9-125 Prepare a FlashCopy without a Consistency Group
IBM_2145:ITSO_SVC3:admin>prestartfcmap App_Map1 IBM_2145:ITSO_SVC3:admin>lsfcmap App_Map1 id 2 name App_Map1 source_vdisk_id 9 source_vdisk_name App_Source target_vdisk_id 10 target_vdisk_name App_Target1 group_id group_name status prepared progress 0 copy_rate 50 start_time dependent_mappings 0 autodelete off clean_progress 0 clean_rate 50 incremental off difference 0 grain_size 256 IO_group_id 0 IO_group_name io_grp0 partner_FC_id partner_FC_name restoring no rc_controlled no
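As noted above, the prepare and start steps can also be combined. A minimal sketch of the single-step alternative for the same mapping:
IBM_2145:ITSO_SVC3:admin>startfcmap -prep App_Map1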
Example 9-126 shows how we prepare the Consistency Groups for the database and the log, and how we check the result. After the commands have executed, all of our FlashCopy mappings and both Consistency Groups are in the prepared status. Now we are ready to start the FlashCopy.
Example 9-126 Prepare a FlashCopy Consistency Group
IBM_2145:ITSO_SVC3:admin>prestartfcconsistgrp FCCG1 IBM_2145:ITSO_SVC3:admin>prestartfcconsistgrp FCCG2 IBM_2145:ITSO_SVC3:admin>lsfcconsistgrp FCCG1 id 1 name FCCG1 status prepared autodelete off FC_mapping_id 0 FC_mapping_name DB_Map1 FC_mapping_id 1 FC_mapping_name Log_Map1 IBM_2145:ITSO_SVC3:admin>lsfcconsistgrp id name status 1 FCCG1 prepared 2 FCCG2 prepared
IBM_2145:ITSO_SVC3:admin>startfcmap App_Map1
IBM_2145:ITSO_SVC3:admin>lsfcmap
id name source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id group_name status progress copy_rate clean_progress incremental partner_FC_id partner_FC_name restoring start_time rc_controlled
0 DB_Map1 3 DB_Source 4 DB_Target1 1 FCCG1 prepared 0 50 0 off no no
1 Log_Map1 6 Log_Source 7 Log_Target1 1 FCCG1 prepared 0 50 0 off no no
2 App_Map1 9 App_Source 10 App_Target1 copying 0 50 100 off no 110929113407 no
3 DB_Map2 3 DB_Source 5 DB_Target2 2 FCCG2 prepared 0 50 0 off no no
4 Log_Map2 6 Log_Source 8 Log_Target2 2 FCCG2 prepared 0 50 0 off no no
IBM_2145:ITSO_SVC3:admin>lsfcmap App_Map1
id 2 name App_Map1 source_vdisk_id 9 source_vdisk_name App_Source target_vdisk_id 10 target_vdisk_name App_Target1 group_id group_name status copying progress 0 copy_rate 50 start_time 110929113407 dependent_mappings 0 autodelete off clean_progress 100 clean_rate 50 incremental off difference 0 grain_size 256 IO_group_id 0 IO_group_name io_grp0 partner_FC_id partner_FC_name restoring no rc_controlled no
IBM_2145:ITSO_SVC3:admin>startfcconsistgrp FCCG1 IBM_2145:ITSO_SVC3:admin>startfcconsistgrp FCCG2 IBM_2145:ITSO_SVC3:admin>lsfcconsistgrp FCCG1 id 1 name FCCG1 status copying autodelete off
FC_mapping_id 0 FC_mapping_name DB_Map1 FC_mapping_id 1 FC_mapping_name Log_Map1 IBM_2145:ITSO_SVC3:admin> IBM_2145:ITSO_SVC3:admin>lsfcconsistgrp id name status 1 FCCG1 copying 2 FCCG2 copying
IBM_2145:ITSO_SVC3:admin>lsfcmapprogress DB_Map1
id progress
0 23
IBM_2145:ITSO_SVC3:admin>lsfcmapprogress Log_Map1
id progress
1 41
IBM_2145:ITSO_SVC3:admin>lsfcmapprogress Log_Map2
id progress
4 4
IBM_2145:ITSO_SVC3:admin>lsfcmapprogress DB_Map2
id progress
3 5
IBM_2145:ITSO_SVC3:admin>lsfcmapprogress App_Map1
id progress
2 10
When the background copy has completed, the FlashCopy mapping enters the idle_or_copied state. When all FlashCopy mappings in a Consistency Group enter this status, the Consistency Group will be at idle_or_copied status. When in this state, the FlashCopy mapping can be deleted and the target disk can be used independently if, for example, another target disk is to be used for the next FlashCopy of the particular source volume.
Tip: In a Multiple Target FlashCopy environment, if you want to stop a mapping or group, consider whether you want to keep any of the dependent mappings. If not, issue the stop command with the -force parameter, which stops all of the dependent maps and negates the need for the stopping copy process to run.
Important: Only stop a FlashCopy mapping when the data on the target volume is not in use, or when you want to modify the FlashCopy mapping. If a mapping is stopped while it is still copying (progress less than 100), the target volume becomes invalid and is set offline by the SVC; the mapping must then be prepared again, or retriggered, to bring the target volume back online. If the background copy has already completed (progress=100), stopping the mapping simply moves it to the idle_or_copied state, and the target remains usable.
Example 9-130 shows how to stop the App_Map1 FlashCopy. Because its background copy had completed, the status of App_Map1 has changed to idle_or_copied.
Example 9-130 Stop APP_Map1 FlashCopy
IBM_2145:ITSO_SVC3:admin>stopfcmap App_Map1 IBM_2145:ITSO_SVC3:admin>lsfcmap App_Map1 id 2 name App_Map1 source_vdisk_id 9 source_vdisk_name App_Source target_vdisk_id 10 target_vdisk_name App_Target1 group_id group_name status idle_or_copied progress 100 copy_rate 50 start_time 110929113407 dependent_mappings 0 autodelete off clean_progress 100 clean_rate 50 incremental off difference 100 grain_size 256 IO_group_id 0 IO_group_name io_grp0 partner_FC_id partner_FC_name restoring no rc_controlled no
If a Consistency Group is stopped while its mappings are still copying, the target volumes become invalid and are set offline by the SVC. The FlashCopy Consistency Group must then be prepared again and restarted to bring the target volumes online again.
Important: Only stop a FlashCopy Consistency Group when the data on the target volumes is not in use, or when you want to modify the FlashCopy Consistency Group. When a Consistency Group is stopped, the target volumes might become invalid and be set offline by the SVC, depending on the state of each mapping.
As shown in Example 9-131, we stop the FCCG1 and FCCG2 Consistency Groups. Because all of the FlashCopy mappings had already completed their copy operations, the two Consistency Groups and all of their mappings go to the idle_or_copied status.
Example 9-131 Stop FCCG1 and FCCG2 Consistency Groups
IBM_2145:ITSO_SVC3:admin>stopfcconsistgrp FCCG1
IBM_2145:ITSO_SVC3:admin>stopfcconsistgrp FCCG2
IBM_2145:ITSO_SVC3:admin>lsfcconsistgrp
id name status
1 FCCG1 idle_or_copied
2 FCCG2 idle_or_copied
IBM_2145:ITSO_SVC3:admin>lsfcmap -delim ,
id,name,source_vdisk_id,source_vdisk_name,target_vdisk_id,target_vdisk_name,group_id,group_name,status,progress,copy_rate,clean_progress,incremental,partner_FC_id,partner_FC_name,restoring,start_time,rc_controlled
0,DB_Map1,3,DB_Source,4,DB_Target1,1,FCCG1,idle_or_copied,100,50,100,off,,,no,110929113806,no
1,Log_Map1,6,Log_Source,7,Log_Target1,1,FCCG1,idle_or_copied,100,50,100,off,,,no,110929113806,no
2,App_Map1,9,App_Source,10,App_Target1,,,idle_or_copied,100,50,100,off,,,no,110929113407,no
3,DB_Map2,3,DB_Source,5,DB_Target2,2,FCCG2,idle_or_copied,100,50,100,off,,,no,110929113806,no
4,Log_Map2,6,Log_Source,8,Log_Target2,2,FCCG2,idle_or_copied,100,50,100,off,,,no,110929113806,no
IBM_2145:ITSO_SVC3:admin>rmfcmap App_Map1
IBM_2145:ITSO_SVC3:admin>rmfcmap DB_Map1 IBM_2145:ITSO_SVC3:admin>rmfcmap DB_Map2 IBM_2145:ITSO_SVC3:admin>rmfcmap Log_Map1 IBM_2145:ITSO_SVC3:admin>rmfcmap Log_Map2 IBM_2145:ITSO_SVC3:admin>rmfcconsistgrp FCCG1 IBM_2145:ITSO_SVC3:admin>rmfcconsistgrp FCCG2 IBM_2145:ITSO_SVC3:admin>lsfcconsistgrp IBM_2145:ITSO_SVC3:admin>lsfcmap IBM_2145:ITSO_SVC3:admin>
capacity 10.00GB type striped formatted no mdisk_id mdisk_name FC_id FC_name RC_id RC_name vdisk_UID 60050768018281BEE00000000000000B throttling 0 preferred_node_id 1 fast_write_state empty cache readwrite udid fc_map_count 0 sync_rate 50 copy_count 1 se_copy_count 1 filesystem mirror_write_priority latency copy_id 0 status online sync yes primary yes mdisk_grp_id 1 mdisk_grp_name Multi_Tier_Pool type striped mdisk_id mdisk_name fast_write_state empty used_capacity 0.41MB real_capacity 221.17MB free_capacity 220.77MB overallocation 4629 autoexpand on warning 80 grainsize 32 se_copy yes easy_tier on easy_tier_status active tier generic_ssd tier_capacity 0.00MB tier generic_hdd tier_capacity 221.17MB
2. Define a FlashCopy mapping in which the non thin-provisioned volume is the source and the thin-provisioned volume is the target. Specify a copy rate as high as possible and activate the -autodelete option for the mapping; see Example 9-135. Example 9-135 mkfcmap
IBM_2145:ITSO_SVC3:admin>mkfcmap -source App_Source -target App_Source_SE -name MigrtoThinProv -copyrate 100 -autodelete FlashCopy Mapping, id [0], successfully created IBM_2145:ITSO_SVC3:admin>lsfcmap 0 id 0 name MigrtoThinProv source_vdisk_id 9
source_vdisk_name App_Source target_vdisk_id 11 target_vdisk_name App_Source_SE group_id group_name status idle_or_copied progress 0 copy_rate 100 start_time dependent_mappings 0 autodelete on clean_progress 100 clean_rate 50 incremental off difference 100 grain_size 256 IO_group_id 0 IO_group_name io_grp0 partner_FC_id partner_FC_name restoring no rc_controlled no
3. Run the prestartfcmap command and the lsfcmap MigrtoThinProv command, as shown in Example 9-136. Example 9-136 prestartfcmap
IBM_2145:ITSO_SVC3:admin>prestartfcmap MigrtoThinProv IBM_2145:ITSO_SVC3:admin>lsfcmap MigrtoThinProv id 0 name MigrtoThinProv source_vdisk_id 9 source_vdisk_name App_Source target_vdisk_id 11 target_vdisk_name App_Source_SE group_id group_name status prepared progress 0 copy_rate 100 start_time dependent_mappings 0 autodelete on clean_progress 0 clean_rate 50 incremental off difference 100 grain_size 256 IO_group_id 0 IO_group_name io_grp0 partner_FC_id partner_FC_name restoring no rc_controlled no
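4. Start the FlashCopy mapping. A minimal sketch of this step, assuming the mapping prepared in the previous step:
IBM_2145:ITSO_SVC3:admin>startfcmap MigrtoThinProv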
5. Monitor the copy process using the lsfcmapprogress command, as shown in Example 9-138.
Example 9-138 lsfcmapprogress command IBM_2145:ITSO_SVC3:admin>lsfcmapprogress MigrtoThinProv id progress 0 67
6. After the background copy completes, the FlashCopy mapping is deleted automatically because of the -autodelete option, as shown in Example 9-139: while the copy is still running, lsfcmap reports the mapping in the copying state, but once it completes, a further lsfcmapprogress fails because the mapping no longer exists.
Example 9-139 lsfcmap command IBM_2145:ITSO_SVC3:admin>lsfcmap MigrtoThinProv id 0 name MigrtoThinProv source_vdisk_id 9 source_vdisk_name App_Source target_vdisk_id 11 target_vdisk_name App_Source_SE group_id group_name status copying progress 67 copy_rate 100 start_time 110929135848 dependent_mappings 0 autodelete on clean_progress 100 clean_rate 50 incremental off difference 100 grain_size 256 IO_group_id 0 IO_group_name io_grp0 partner_FC_id partner_FC_name restoring no rc_controlled no IBM_2145:ITSO_SVC3:admin>lsfcmapprogress MigrtoThinProv CMMVC5804E The action failed because an object that was specified in the command does not exist. IBM_2145:ITSO_SVC3:admin>
An independent copy of the source volume (App_Source) has been created. The migration has completed, as shown in Example 9-140.
Example 9-140 lsvdisk App_Source IBM_2145:ITSO_SVC3:admin>lsvdisk App_Source id 9 name App_Source IO_group_id 0 IO_group_name io_grp0 status online mdisk_grp_id 1 mdisk_grp_name Multi_Tier_Pool
capacity 10.00GB type striped formatted no mdisk_id mdisk_name FC_id FC_name RC_id RC_name vdisk_UID 60050768018281BEE000000000000009 throttling 0 preferred_node_id 1 fast_write_state empty cache readwrite udid fc_map_count 0 sync_rate 50 copy_count 1 se_copy_count 0 filesystem mirror_write_priority latency copy_id 0 status online sync yes primary yes mdisk_grp_id 1 mdisk_grp_name Multi_Tier_Pool type striped mdisk_id mdisk_name fast_write_state empty used_capacity 10.00GB real_capacity 10.00GB free_capacity 0.00MB overallocation 100 autoexpand warning grainsize se_copy no easy_tier on easy_tier_status active tier generic_ssd tier_capacity 0.00MB tier generic_hdd tier_capacity 10.00GB
Real size: Regardless of the real size that you define for the target thin-provisioned volume, its real size will grow to at least the capacity of the source volume. To migrate a thin-provisioned volume to a fully allocated volume, you can follow the same scenario, with the roles of source and target reversed.
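A minimal sketch of that reverse direction, assuming a fully allocated target volume has already been created with the hypothetical name App_Source_Full:
IBM_2145:ITSO_SVC3:admin>mkfcmap -source App_Source_SE -target App_Source_Full -name MigrtoFull -copyrate 100 -autodelete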
In Example 9-141, FCMAP_1 is the forward FlashCopy mapping, and FCMAP_rev_1 is a reverse FlashCopy mapping. We also have a cascaded mapping, FCMAP_2, whose source is FCMAP_1's target volume and whose target is a separate volume named Volume_FC_T1. In our example, after creating the environment, we started FCMAP_1 and, later, FCMAP_2. To illustrate why the -restore parameter is required, we then started FCMAP_rev_1 without specifying it, which produces the following message:
CMMVC6298E The command failed because a target VDisk has dependent FlashCopy mappings.
When starting a reverse FlashCopy mapping, you must use the -restore option to indicate that you want to overwrite the data on the source disk of the forward mapping.
Example 9-141 Reverse FlashCopy
IBM_2145:ITSO_SVC3:admin>lsvdisk id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state se_copy_count RC_change 3 Volume_FC_S 0 io_grp0 online 1 Multi_Tier_Pool 10.00GB striped 60050768018281BEE000000000000003 0 1 empty 0 0 no 4 Volume_FC_T_S1 0 io_grp0 online 1 Multi_Tier_Pool 10.00GB striped 60050768018281BEE000000000000004 0 1 empty 0 0 no 5 Volume_FC_T1 0 io_grp0 online 1 Multi_Tier_Pool 10.00GB striped 60050768018281BEE000000000000005 0 1 empty 0 0 no IBM_2145:ITSO_SVC3:admin>mkfcmap -source Volume_FC_S -target Volume_FC_T_S1 -name FCMAP_1 -copyrate 50 FlashCopy Mapping, id [0], successfully created IBM_2145:ITSO_SVC3:admin>mkfcmap -source Volume_FC_T_S1 -target Volume_FC_S -name FCMAP_rev_1 -copyrate 50 FlashCopy Mapping, id [1], successfully created IBM_2145:ITSO_SVC3:admin>mkfcmap -source Volume_FC_T_S1 -target Volume_FC_T1 -name FCMAP_2 -copyrate 50 FlashCopy Mapping, id [2], successfully created IBM_2145:ITSO_SVC3:admin>lsfcmap id name source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id group_name status progress copy_rate clean_progress incremental partner_FC_id partner_FC_name restoring start_time rc_controlled 0 FCMAP_1 3 Volume_FC_S 4 Volume_FC_T_S1 idle_or_copied 0 50 100 off 1 FCMAP_rev_1 no no 1 FCMAP_rev_1 4 Volume_FC_T_S1 3 Volume_FC_S idle_or_copied 0 50 100 off 0 FCMAP_1 no no 2 FCMAP_2 4 Volume_FC_T_S1 5 Volume_FC_T1 idle_or_copied 0 50 100 off no no
IBM_2145:ITSO_SVC3:admin>startfcmap -prep FCMAP_1 IBM_2145:ITSO_SVC3:admin>startfcmap -prep FCMAP_2 IBM_2145:ITSO_SVC3:admin>lsfcmap id name source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id group_name status progress copy_rate clean_progress incremental partner_FC_id partner_FC_name restoring start_time rc_controlled 0 FCMAP_1 3 Volume_FC_S 4 Volume_FC_T_S1 copying 0 50 100 off 1 FCMAP_rev_1 no no 1 FCMAP_rev_1 4 Volume_FC_T_S1 3 Volume_FC_S idle_or_copied 0 50 100 off 0 FCMAP_1 no no 2 FCMAP_2 4 Volume_FC_T_S1 5 Volume_FC_T1 copying 4 50 100 off no 110929143739 no IBM_2145:ITSO_SVC3:admin>startfcmap -prep FCMAP_rev_1 CMMVC6298E The command failed because a target VDisk has dependent FlashCopy mappings. IBM_2145:ITSO_SVC3:admin>startfcmap -prep -restore FCMAP_rev_1 IBM_2145:ITSO_SVC3:admin>lsfcmap id name source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id group_name status progress copy_rate clean_progress incremental partner_FC_id partner_FC_name restoring start_time rc_controlled 0 FCMAP_1 3 Volume_FC_S 4 Volume_FC_T_S1 copying 43 100 56 off 1 FCMAP_rev_1 no 110929151911 no 1 FCMAP_rev_1 4 Volume_FC_T_S1 3 Volume_FC_S copying 56 100 43 off 0 FCMAP_1 yes 110929152030 no 2 FCMAP_2 4 Volume_FC_T_S1 5 Volume_FC_T1 copying 37 100 100 off no 110929151926 no As you can see in Example 9-141 on page 553, FCMAP_rev_1 shows a restoring value of yes while the FlashCopy mapping is copying. After it has finished copying, the restoring value field will change to no.
Without the -split option, volume A remains at the head of the cascade (A → C → D). Consider this sequence of steps:
1. The user takes a backup using the mapping A → B. A is the production volume, and B is a backup.
2. At a later point, the user experiences corruption on A and therefore reverses the mapping (B → A).
3. The user then takes another backup from the production disk A, resulting in the cascade B → A → C.
Stopping A → B without the -split option results in the cascade B → C. Note that the backup disk B is now at the head of this cascade. When the user next wants to take a backup to B, the user can still start the mapping A → B (using the -restore flag), but the user cannot then reverse the mapping to A (B → A or C → A).
Stopping A → B with the -split option results in the cascade A → C instead. This action does not cause the same problem, because the production disk A, rather than the backup disk B, remains at the head of the cascade.
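Where the dependent mappings do not need to be kept, a minimal sketch of stopping such a mapping with the -split option, assuming a hypothetical mapping name of Map_A_B for the A → B mapping:
IBM_2145:ITSO_SVC3:admin>stopfcmap -split Map_A_B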
Because data consistency is needed across the MM_DB_Pri and MM_DBLog_Pri volumes, a Consistency Group named CG_W2K3_MM is created to handle the Metro Mirror relationships for them. Because in this scenario the application files are independent of the database, a stand-alone Metro Mirror relationship is created for the MM_App_Pri volume. Figure 9-7 on page 556 illustrates the Metro Mirror setup.
5. Create the Metro Mirror relationship for MM_App_Pri:
Master: MM_App_Pri
Auxiliary: MM_App_Sec
Auxiliary SVC system: ITSO_SVC4
Name: MMREL3
Preverification
To verify that both systems can communicate with each other, use the lspartnershipcandidate command. As shown in Example 9-142, ITSO_SVC4 is an eligible SVC system candidate at ITSO_SVC1 for the SVC system partnership, and vice versa. Therefore, both systems are communicating with each other.
Example 9-142 Listing the available SVC systems for partnership
IBM_2145:ITSO_SVC1:admin>lspartnershipcandidate id configured name 0000020061C06FCA no ITSO_SVC4 000002006AC03A42 no ITSO_SVC2 0000020060A06FB8 no ITSO_SVC3 00000200A0C006B2 no ITSO-Storwize-V7000-2 IBM_2145:ITSO_SVC4:admin>lspartnershipcandidate id configured name 000002006AC03A42 no ITSO_SVC2 0000020060A06FB8 no ITSO_SVC3 00000200A0C006B2 no ITSO-Storwize-V7000-2 000002006BE04FC4 no ITSO_SVC1
Example 9-143 shows the output of the lspartnership and lssystem commands before setting up the Metro Mirror relationship, so that you can compare it with the same output after the relationship has been configured. Starting with SVC 6.3, you can create a partnership between an SVC system and an IBM Storwize V7000 system. To do so, you must change the layer parameter on the IBM Storwize V7000 system from storage to replication by using the chsystem command. This parameter cannot be changed on the SVC system, where it is fixed at appliance.
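A minimal sketch of that layer change, run on the Storwize V7000 side (the 2076 prompt form and system name are illustrative):
IBM_2076:ITSO-Storwize-V7000-2:admin>chsystem -layer replication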
Example 9-143 Pre-verification of system configuration IBM_2145:ITSO_SVC1:admin>lspartnership id name location partnership bandwidth 000002006BE04FC4 ITSO_SVC1 local
IBM_2145:ITSO_SVC4:admin>lspartnership id name location partnership bandwidth 0000020061C06FCA ITSO_SVC4 local IBM_2145:ITSO_SVC1:admin>lssystem id 000002006BE04FC4 name ITSO_SVC1 location local partnership bandwidth total_mdisk_capacity 766.5GB space_in_mdisk_grps 766.5GB space_allocated_to_vdisks 0.00MB total_free_space 766.5GB total_vdiskcopy_capacity 0.00MB total_used_capacity 0.00MB total_overallocation 0 total_vdisk_capacity 0.00MB total_allocated_extent_capacity 1.50GB statistics_status on statistics_frequency 15 cluster_locale en_US time_zone 520 US/Pacific code_level 6.3.0.0 (build 54.0.1109090000) console_IP 10.18.228.81:443 id_alias 000002006BE04FC4 gm_link_tolerance 300 gm_inter_cluster_delay_simulation 0 gm_intra_cluster_delay_simulation 0 gm_max_host_delay 5 email_reply email_contact email_contact_primary email_contact_alternate email_contact_location email_contact2 email_contact2_primary email_contact2_alternate email_state stopped inventory_mail_interval 0 cluster_ntp_IP_address cluster_isns_IP_address iscsi_auth_method chap iscsi_chap_secret passw0rd auth_service_configured no auth_service_enabled no auth_service_url auth_service_user_name auth_service_pwd_set no auth_service_cert_set no auth_service_type tip relationship_bandwidth_limit 25 tier generic_ssd tier_capacity 0.00MB tier_free_capacity 0.00MB tier generic_hdd tier_capacity 766.50GB tier_free_capacity 766.50GB has_nas_key no
layer appliance IBM_2145:ITSO_SVC4:admin>lssystem id 0000020061C06FCA name ITSO_SVC4 location local partnership bandwidth total_mdisk_capacity 768.0GB space_in_mdisk_grps 0 space_allocated_to_vdisks 0.00MB total_free_space 768.0GB total_vdiskcopy_capacity 0.00MB total_used_capacity 0.00MB total_overallocation 0 total_vdisk_capacity 0.00MB total_allocated_extent_capacity 0.00MB statistics_status on statistics_frequency 15 cluster_locale en_US time_zone 520 US/Pacific code_level 6.3.0.0 (build 54.0.1109090000) console_IP 10.18.228.84:443 id_alias 0000020061C06FCA gm_link_tolerance 300 gm_inter_cluster_delay_simulation 0 gm_intra_cluster_delay_simulation 0 gm_max_host_delay 5 email_reply email_contact email_contact_primary email_contact_alternate email_contact_location
IBM_2145:ITSO_SVC1:admin>mkpartnership -bandwidth 50 ITSO_SVC4
IBM_2145:ITSO_SVC1:admin>lspartnership
id name location partnership bandwidth
000002006BE04FC4 ITSO_SVC1 local
0000020061C06FCA ITSO_SVC4 remote partially_configured_local 50
In Example 9-145, the partnership is created from ITSO_SVC4 back to ITSO_SVC1, again specifying a background copy bandwidth of 50 MBps. After creating the partnership, verify that the partnership is fully configured on both systems by reissuing the lspartnership command.
Example 9-145 Creating the partnership from ITSO_SVC4 to ITSO_SVC1 and verifying the partnership
IBM_2145:ITSO_SVC4:admin>mkpartnership -bandwidth 50 ITSO_SVC1 IBM_2145:ITSO_SVC4:admin>lspartnership id name location partnership bandwidth 0000020061C06FCA ITSO_SVC4 local 000002006BE04FC4 ITSO_SVC1 remote fully_configured 50
IBM_2145:ITSO_SVC1:admin>mkrcconsistgrp -cluster ITSO_SVC4 -name CG_W2K3_MM RC Consistency Group, id [0], successfully created IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp id name master_cluster_id master_cluster_name aux_cluster_id aux_cluster_name primary state relationship_count copy_type cycling_mode 0 CG_W2K3_MM 000002006BE04FC4 ITSO_SVC1 0000020061C06FCA ITSO_SVC4 empty 0 empty_group none
In Example 9-147, we list the candidate volumes in the ITSO_SVC1 system, and we then use the lsrcrelationshipcandidate command to show the eligible volumes in the ITSO_SVC4 system. By using this command, we check the possible candidates for MM_DB_Pri. After checking all of these conditions, we use the mkrcrelationship command to create the Metro Mirror relationship. To verify the newly created Metro Mirror relationships, we list them with the lsrcrelationship command.
Example 9-147 Creating Metro Mirror relationships MMREL1 and MMREL2 IBM_2145:ITSO_SVC1:admin>lsvdisk -filtervalue name=MM* id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state se_copy_count RC_change 0 MM_DB_Pri 0 io_grp0 online 0 Pool_DS3500-1 10.00GB striped 6005076801AF813F1000000000000031 0 1 empty 0 0 no 1 MM_DBLog_Pri 0 io_grp0 online 0 Pool_DS3500-1 10.00GB striped 6005076801AF813F1000000000000032 0 1 empty 0 0 no 2 MM_App_Pri 0 io_grp0 online 0 Pool_DS3500-1 10.00GB striped 6005076801AF813F1000000000000033 0 1 empty 0 0 no IBM_2145:ITSO_SVC1:admin>lsrcrelationshipcandidate id vdisk_name 0 MM_DB_Pri 1 MM_DBLog_Pri 2 MM_App_Pri IBM_2145:ITSO_SVC1:admin>lsrcrelationshipcandidate -aux ITSO_SVC4 -master MM_DB_Pri id vdisk_name 0 MM_DB_Sec 1 MM_DBLog_Sec 2 MM_App_Sec IBM_2145:ITSO_SVC1:admin>mkrcrelationship -master MM_DB_Pri -aux MM_DB_Sec -cluster ITSO_SVC4 -consistgrp CG_W2K3_MM -name MMREL1 RC Relationship, id [0], successfully created IBM_2145:ITSO_SVC1:admin>mkrcrelationship -master MM_Log_Pri -aux MM_Log_Sec -cluster ITSO_SVC4 -consistgrp CG_W2K3_MM -name MMREL2 RC Relationship, id [3], successfully created IBM_2145:ITSO_SVC1:admin>lsrcrelationship id name master_cluster_id master_cluster_name master_vdisk_id master_vdisk_name aux_cluster_id aux_cluster_name aux_vdisk_id aux_vdisk_name primary consistency_group_id consistency_group_name state bg_copy_priority progress copy_type cycling_mode 0 MMREL1 000002006BE04FC4 ITSO_SVC1 0 MM_DB_Pri 0000020061C06FCA ITSO_SVC4 0 MM_DB_Sec master 0 CG_W2K3_MM inconsistent_stopped 50 0 metro none 3 MMREL2 000002006BE04FC4 ITSO_SVC1 3 MM_Log_Pri 0000020061C06FCA ITSO_SVC4 3 MM_Log_Sec master 0 CG_W2K3_MM inconsistent_stopped 50 0 metro none
Notice that the state of MMREL3 is consistent_stopped. MMREL3 is in this state because it was created with the -sync option. The -sync option indicates that the secondary (auxiliary) volume is already synchronized with the primary (master) volume, so the initial background synchronization is skipped, even though the volumes are not actually synchronized in this scenario. We created the relationship for MM_App_Sec with the -sync option purely to illustrate the case of pre-synchronized master and auxiliary volumes. Tip: Use the -sync option only when the target volume has already mirrored all of the data from the source volume; with this option, there is no initial background copy between the primary volume and the secondary volume. MMREL1 and MMREL2 are in the inconsistent_stopped state because they were not created with the -sync option, so their auxiliary volumes need to be synchronized with their primary volumes.
Example 9-148 Creating a stand-alone relationship and verifying it
IBM_2145:ITSO_SVC1:admin>mkrcrelationship -master MM_App_Pri -aux MM_App_Sec -sync -cluster ITSO_SVC4 -name MMREL3 RC Relationship, id [2], successfully created IBM_2145:ITSO_SVC1:admin>lsrcrelationship 2 id 2 name MMREL3 master_cluster_id 000002006BE04FC4 master_cluster_name ITSO_SVC1 master_vdisk_id 2 master_vdisk_name MM_App_Pri aux_cluster_id 0000020061C06FCA aux_cluster_name ITSO_SVC4 aux_vdisk_id 2 aux_vdisk_name MM_App_Sec primary master consistency_group_id consistency_group_name state consistent_stopped bg_copy_priority 50 progress 100 freeze_time status online sync in_sync copy_type metro cycle_period_seconds 300 cycling_mode none master_change_vdisk_id master_change_vdisk_name aux_change_vdisk_id aux_change_vdisk_name
IBM_2145:ITSO_SVC1:admin>startrcrelationship MMREL3 IBM_2145:ITSO_SVC1:admin>lsrcrelationship MMREL3 id 2 name MMREL3 master_cluster_id 000002006BE04FC4 master_cluster_name ITSO_SVC1 master_vdisk_id 2 master_vdisk_name MM_App_Pri aux_cluster_id 0000020061C06FCA aux_cluster_name ITSO_SVC4 aux_vdisk_id 2 aux_vdisk_name MM_App_Sec primary master consistency_group_id consistency_group_name state consistent_synchronized bg_copy_priority 50 progress freeze_time status online sync copy_type metro cycle_period_seconds 300 cycling_mode none master_change_vdisk_id master_change_vdisk_name aux_change_vdisk_id aux_change_vdisk_name
Upon completion of the background copy, it enters the Consistent synchronized state.
Example 9-150 Starting the Metro Mirror Consistency Group
IBM_2145:ITSO_SVC1:admin>startrcconsistgrp CG_W2K3_MM IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp id name master_cluster_id master_cluster_name aux_cluster_id aux_cluster_name primary state relationship_count copy_type cycling_mode 0 CG_W2K3_MM 000002006BE04FC4 ITSO_SVC1 0000020061C06FCA ITSO_SVC4 master inconsistent_copying 2 metro none
IBM_2145:ITSO_SVC1:admin>lsrcrelationship MMREL1 id 0 name MMREL1 master_cluster_id 000002006BE04FC4 master_cluster_name ITSO_SVC1 master_vdisk_id 0 master_vdisk_name MM_DB_Pri aux_cluster_id 0000020061C06FCA aux_cluster_name ITSO_SVC4 aux_vdisk_id 0 aux_vdisk_name MM_DB_Sec primary master consistency_group_id 0 consistency_group_name CG_W2K3_MM state inconsistent_copying bg_copy_priority 50 progress 81 freeze_time status online sync copy_type metro cycle_period_seconds 300 cycling_mode none master_change_vdisk_id master_change_vdisk_name aux_change_vdisk_id aux_change_vdisk_name IBM_2145:ITSO_SVC1:admin>lsrcrelationship MMREL2 id 3 name MMREL2
master_cluster_id 000002006BE04FC4 master_cluster_name ITSO_SVC1 master_vdisk_id 3 master_vdisk_name MM_Log_Pri aux_cluster_id 0000020061C06FCA aux_cluster_name ITSO_SVC4 aux_vdisk_id 3 aux_vdisk_name MM_Log_Sec primary master consistency_group_id 0 consistency_group_name CG_W2K3_MM state inconsistent_copying bg_copy_priority 50 progress 82 freeze_time status online sync copy_type metro cycle_period_seconds 300 cycling_mode none master_change_vdisk_id master_change_vdisk_name aux_change_vdisk_id aux_change_vdisk_name When all Metro Mirror relationships have completed the background copy the Consistency Group enters the Consistent synchronized state, as shown in Example 9-152.
Example 9-152 Listing the Metro Mirror Consistency Group
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_MM id 0 name CG_W2K3_MM master_cluster_id 000002006BE04FC4 master_cluster_name ITSO_SVC1 aux_cluster_id 0000020061C06FCA aux_cluster_name ITSO_SVC4 primary master state consistent_synchronized relationship_count 2 freeze_time status sync copy_type metro cycle_period_seconds 300 cycling_mode none RC_rel_id 0 RC_rel_name MMREL1 RC_rel_id 3 RC_rel_name MMREL2
IBM_2145:ITSO_SVC1:admin>stoprcrelationship -access MMREL3 IBM_2145:ITSO_SVC1:admin>lsrcrelationship MMREL3 id 2 name MMREL3 master_cluster_id 000002006BE04FC4 master_cluster_name ITSO_SVC1 master_vdisk_id 2 master_vdisk_name MM_App_Pri aux_cluster_id 0000020061C06FCA aux_cluster_name ITSO_SVC4 aux_vdisk_id 2 aux_vdisk_name MM_App_Sec primary consistency_group_id consistency_group_name state idling bg_copy_priority 50 progress freeze_time status sync in_sync copy_type metro cycle_period_seconds 300 cycling_mode none master_change_vdisk_id master_change_vdisk_name aux_change_vdisk_id aux_change_vdisk_name
IBM_2145:ITSO_SVC1:admin>stoprcconsistgrp CG_W2K3_MM IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_MM id 0 name CG_W2K3_MM master_cluster_id 000002006BE04FC4 master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA aux_cluster_name ITSO_SVC4 primary master state consistent_stopped relationship_count 2 freeze_time status sync in_sync copy_type metro cycle_period_seconds 300 cycling_mode none RC_rel_id 0 RC_rel_name MMREL1 RC_rel_id 3 RC_rel_name MMREL2 If, afterwards, we want to enable access (write I/O) to the secondary volume, we reissue the stoprcconsistgrp command specifying the -access flag. The Consistency Group transitions to the Idling state, as shown in Example 9-155.
Example 9-155 Stopping a Metro Mirror Consistency Group and enabling access to the secondary
IBM_2145:ITSO_SVC1:admin>stoprcconsistgrp -access CG_W2K3_MM IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_MM id 0 name CG_W2K3_MM master_cluster_id 000002006BE04FC4 master_cluster_name ITSO_SVC1 aux_cluster_id 0000020061C06FCA aux_cluster_name ITSO_SVC4 primary state idling relationship_count 2 freeze_time status sync in_sync copy_type metro cycle_period_seconds 300 cycling_mode none RC_rel_id 0 RC_rel_name MMREL1 RC_rel_id 3 RC_rel_name MMREL2
IBM_2145:ITSO_SVC1:admin>lsrcrelationship MMREL3 id 2 name MMREL3 master_cluster_id 000002006BE04FC4 master_cluster_name ITSO_SVC1 master_vdisk_id 2 master_vdisk_name MM_App_Pri aux_cluster_id 0000020061C06FCA aux_cluster_name ITSO_SVC4 aux_vdisk_id 2 aux_vdisk_name MM_App_Sec primary master consistency_group_id consistency_group_name state consistent_synchronized bg_copy_priority 50 progress freeze_time status online sync copy_type metro cycle_period_seconds 300 cycling_mode none master_change_vdisk_id master_change_vdisk_name aux_change_vdisk_id aux_change_vdisk_name
IBM_2145:ITSO_SVC1:admin>startrcconsistgrp -force -primary aux CG_W2K3_MM IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_MM id 0 name CG_W2K3_MM master_cluster_id 000002006BE04FC4 master_cluster_name ITSO_SVC1 aux_cluster_id 0000020061C06FCA aux_cluster_name ITSO_SVC4 primary aux state consistent_synchronized relationship_count 2 freeze_time
status sync copy_type metro cycle_period_seconds 300 cycling_mode none RC_rel_id 0 RC_rel_name MMREL1 RC_rel_id 3 RC_rel_name MMREL2
IBM_2145:ITSO_SVC1:admin>lsrcrelationship MMREL3 id 2 name MMREL3 master_cluster_id 000002006BE04FC4 master_cluster_name ITSO_SVC1 master_vdisk_id 2 master_vdisk_name MM_App_Pri aux_cluster_id 0000020061C06FCA aux_cluster_name ITSO_SVC4 aux_vdisk_id 2 aux_vdisk_name MM_App_Sec primary master consistency_group_id consistency_group_name state consistent_synchronized bg_copy_priority 50 progress freeze_time status online sync
copy_type metro cycle_period_seconds 300 cycling_mode none master_change_vdisk_id master_change_vdisk_name aux_change_vdisk_id aux_change_vdisk_name IBM_2145:ITSO_SVC1:admin>switchrcrelationship -primary aux MMREL3 IBM_2145:ITSO_SVC1:admin>lsrcrelationship MMREL3 id 2 name MMREL3 master_cluster_id 000002006BE04FC4 master_cluster_name ITSO_SVC1 master_vdisk_id 2 master_vdisk_name MM_App_Pri aux_cluster_id 0000020061C06FCA aux_cluster_name ITSO_SVC4 aux_vdisk_id 2 aux_vdisk_name MM_App_Sec primary aux consistency_group_id consistency_group_name state consistent_synchronized bg_copy_priority 50 progress freeze_time status online sync copy_type metro cycle_period_seconds 300 cycling_mode none master_change_vdisk_id master_change_vdisk_name aux_change_vdisk_id aux_change_vdisk_name
Example 9-159 Switching the copy direction for a Metro Mirror Consistency Group
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_MM id 0 name CG_W2K3_MM master_cluster_id 000002006BE04FC4 master_cluster_name ITSO_SVC1 aux_cluster_id 0000020061C06FCA aux_cluster_name ITSO_SVC4 primary master state consistent_synchronized relationship_count 2 freeze_time status sync copy_type metro cycle_period_seconds 300 cycling_mode none RC_rel_id 0 RC_rel_name MMREL1 RC_rel_id 3 RC_rel_name MMREL2 IBM_2145:ITSO_SVC1:admin>switchrcconsistgrp -primary aux CG_W2K3_MM IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_MM id 0 name CG_W2K3_MM master_cluster_id 000002006BE04FC4 master_cluster_name ITSO_SVC1 aux_cluster_id 0000020061C06FCA aux_cluster_name ITSO_SVC4 primary aux state consistent_synchronized relationship_count 2 freeze_time status sync copy_type metro cycle_period_seconds 300 cycling_mode none RC_rel_id 0 RC_rel_name MMREL1 RC_rel_id 3 RC_rel_name MMREL2
In this section, we describe how to configure the SVC system partnership for each configuration.
Important: To have a supported and working configuration, all SVC systems must be at level 5.1 or higher.
In our scenarios, we configure the SVC partnership by referring to the clustered systems as A, B, C, and D:
ITSO_SVC1 = A
ITSO_SVC2 = B
ITSO_SVC3 = C
ITSO_SVC4 = D
Example 9-160 shows the available systems for a partnership, using the lspartnershipcandidate command on each system.
Example 9-160 Available clustered systems
IBM_2145:ITSO_SVC1:admin>lspartnershipcandidate id configured name 0000020061C06FCA no ITSO_SVC4 0000020060A06FB8 no ITSO_SVC3 000002006AC03A42 no ITSO_SVC2 IBM_2145:ITSO_SVC2:admin>lspartnershipcandidate id configured name 0000020061C06FCA no ITSO_SVC4 000002006BE04FC4 no ITSO_SVC1 0000020060A06FB8 no ITSO_SVC3 IBM_2145:ITSO_SVC3:admin>lspartnershipcandidate id configured name 000002006BE04FC4 no ITSO_SVC1 0000020061C06FCA no ITSO_SVC4 000002006AC03A42 no ITSO_SVC2 IBM_2145:ITSO_SVC4:admin>lspartnershipcandidate id configured name 000002006BE04FC4 no ITSO_SVC1 0000020060A06FB8 no ITSO_SVC3 000002006AC03A42 no ITSO_SVC2
Example 9-161 shows the sequence of mkpartnership commands to execute to create a star configuration.
Example 9-161 Creating a star configuration using the mkpartnership command
From ITSO_SVC1 to multiple systems
IBM_2145:ITSO_SVC1:admin>mkpartnership -bandwidth 50 ITSO_SVC2
IBM_2145:ITSO_SVC1:admin>mkpartnership -bandwidth 50 ITSO_SVC3
IBM_2145:ITSO_SVC1:admin>mkpartnership -bandwidth 50 ITSO_SVC4
From ITSO_SVC2 to ITSO_SVC1
IBM_2145:ITSO_SVC2:admin>mkpartnership -bandwidth 50 ITSO_SVC1
From ITSO_SVC3 to ITSO_SVC1
IBM_2145:ITSO_SVC3:admin>mkpartnership -bandwidth 50 ITSO_SVC1
From ITSO_SVC4 to ITSO_SVC1
IBM_2145:ITSO_SVC4:admin>mkpartnership -bandwidth 50 ITSO_SVC1
From ITSO_SVC1
IBM_2145:ITSO_SVC1:admin>lspartnership
id name location partnership bandwidth
000002006BE04FC4 ITSO_SVC1 local
000002006AC03A42 ITSO_SVC2 remote fully_configured 50
0000020060A06FB8 ITSO_SVC3 remote fully_configured 50
0000020061C06FCA ITSO_SVC4 remote fully_configured 50
From ITSO_SVC2
IBM_2145:ITSO_SVC2:admin>lspartnership
id name location partnership bandwidth
000002006AC03A42 ITSO_SVC2 local
000002006BE04FC4 ITSO_SVC1 remote fully_configured 50
From ITSO_SVC3
IBM_2145:ITSO_SVC3:admin>lspartnership
id name location partnership bandwidth
0000020060A06FB8 ITSO_SVC3 local
000002006BE04FC4 ITSO_SVC1 remote fully_configured 50
From ITSO_SVC4
IBM_2145:ITSO_SVC4:admin>lspartnership
id name location partnership bandwidth
0000020061C06FCA ITSO_SVC4 local
000002006BE04FC4 ITSO_SVC1 remote fully_configured 50
After the SVC partnership has been configured, you can configure any rcrelationship or rcconsistgrp that you need. Make sure that a single volume is only in one relationship.
Triangle configuration
Figure 9-9 shows the triangle configuration.
Example 9-162 shows the sequence of mkpartnership commands to execute to create a triangle configuration.
Example 9-162 Creating a triangle configuration
From ITSO_SVC1 to ITSO_SVC2 and ITSO_SVC3 IBM_2145:ITSO_SVC1:admin>mkpartnership -bandwidth 50 ITSO_SVC2 IBM_2145:ITSO_SVC1:admin>mkpartnership -bandwidth 50 ITSO_SVC3 IBM_2145:ITSO_SVC1:admin>lspartnership id name location partnership bandwidth 000002006BE04FC4 ITSO_SVC1 local 000002006AC03A42 ITSO_SVC2 remote partially_configured_local 50 0000020060A06FB8 ITSO_SVC3 remote partially_configured_local 50
From ITSO_SVC2 to ITSO_SVC1 and ITSO_SVC3
IBM_2145:ITSO_SVC2:admin>mkpartnership -bandwidth 50 ITSO_SVC1
IBM_2145:ITSO_SVC2:admin>mkpartnership -bandwidth 50 ITSO_SVC3
IBM_2145:ITSO_SVC2:admin>lspartnership
id name location partnership bandwidth
000002006AC03A42 ITSO_SVC2 local
000002006BE04FC4 ITSO_SVC1 remote fully_configured 50
0000020060A06FB8 ITSO_SVC3 remote partially_configured_local 50
From ITSO_SVC3 to ITSO_SVC1 and ITSO_SVC2
IBM_2145:ITSO_SVC3:admin>mkpartnership -bandwidth 50 ITSO_SVC1
IBM_2145:ITSO_SVC3:admin>mkpartnership -bandwidth 50 ITSO_SVC2
IBM_2145:ITSO_SVC3:admin>lspartnership
id name location partnership bandwidth
0000020060A06FB8 ITSO_SVC3 local
000002006BE04FC4 ITSO_SVC1 remote fully_configured 50
000002006AC03A42 ITSO_SVC2 remote fully_configured 50
After the SVC partnership has been configured, you can configure any rcrelationship or rcconsistgrp that you need. Make sure that a single volume is only in one relationship.
Example 9-163 shows the sequence of mkpartnership commands to execute to create a fully connected configuration.
Example 9-163 Creating a fully connected configuration
From ITSO_SVC1 to ITSO_SVC2, ITSO_SVC3 and ITSO_SVC4
IBM_2145:ITSO_SVC1:admin>mkpartnership -bandwidth 50 ITSO_SVC2
IBM_2145:ITSO_SVC1:admin>mkpartnership -bandwidth 50 ITSO_SVC3
IBM_2145:ITSO_SVC1:admin>mkpartnership -bandwidth 50 ITSO_SVC4
IBM_2145:ITSO_SVC1:admin>lspartnership
id name location partnership bandwidth
000002006BE04FC4 ITSO_SVC1 local
000002006AC03A42 ITSO_SVC2 remote partially_configured_local 50
0000020060A06FB8 ITSO_SVC3 remote partially_configured_local 50
0000020061C06FCA ITSO_SVC4 remote partially_configured_local 50
From ITSO_SVC2 to ITSO_SVC1, ITSO_SVC3 and ITSO_SVC4
IBM_2145:ITSO_SVC2:admin>mkpartnership -bandwidth 50 ITSO_SVC1
IBM_2145:ITSO_SVC2:admin>mkpartnership -bandwidth 50 ITSO_SVC3
IBM_2145:ITSO_SVC2:admin>mkpartnership -bandwidth 50 ITSO_SVC4
IBM_2145:ITSO_SVC2:admin>lspartnership
id name location partnership bandwidth
000002006AC03A42 ITSO_SVC2 local
000002006BE04FC4 ITSO_SVC1 remote fully_configured 50
0000020060A06FB8 ITSO_SVC3 remote partially_configured_local 50
0000020061C06FCA ITSO_SVC4 remote partially_configured_local 50
From ITSO_SVC3 to ITSO_SVC1, ITSO_SVC2 and ITSO_SVC4
IBM_2145:ITSO_SVC3:admin>mkpartnership -bandwidth 50 ITSO_SVC1
IBM_2145:ITSO_SVC3:admin>mkpartnership -bandwidth 50 ITSO_SVC2
IBM_2145:ITSO_SVC3:admin>mkpartnership -bandwidth 50 ITSO_SVC4
IBM_2145:ITSO_SVC3:admin>lspartnership
id name location partnership bandwidth
0000020060A06FB8 ITSO_SVC3 local
000002006BE04FC4 ITSO_SVC1 remote fully_configured 50
000002006AC03A42 ITSO_SVC2 remote fully_configured 50
0000020061C06FCA ITSO_SVC4 remote partially_configured_local 50
From ITSO_SVC4 to ITSO_SVC1, ITSO_SVC2 and ITSO_SVC3
IBM_2145:ITSO_SVC4:admin>mkpartnership -bandwidth 50 ITSO_SVC1
IBM_2145:ITSO_SVC4:admin>mkpartnership -bandwidth 50 ITSO_SVC2
IBM_2145:ITSO_SVC4:admin>mkpartnership -bandwidth 50 ITSO_SVC3
IBM_2145:ITSO_SVC4:admin>lspartnership
id name location partnership bandwidth
0000020061C06FCA ITSO_SVC4 local
000002006BE04FC4 ITSO_SVC1 remote fully_configured 50
000002006AC03A42 ITSO_SVC2 remote fully_configured 50
0000020060A06FB8 ITSO_SVC3 remote fully_configured 50
After the SVC partnership has been configured, you can configure any rcrelationship or rcconsistgrp that you need. Make sure that a single volume is only in one relationship.
Daisy-chain configuration
Figure 9-11 on page 577 shows the daisy-chain configuration.
Example 9-164 shows the sequence of mkpartnership commands to execute to create a daisy-chain configuration.
Example 9-164 Creating a daisy-chain configuration
From ITSO_SVC1 to ITSO_SVC2 IBM_2145:ITSO_SVC1:admin>mkpartnership -bandwidth 50 ITSO_SVC2 IBM_2145:ITSO_SVC1:admin>lspartnership id name location partnership bandwidth 000002006BE04FC4 ITSO_SVC1 local 000002006AC03A42 ITSO_SVC2 remote partially_configured_local 50 From ITSO_SVC2 to ITSO_SVC1 and ITSO_SVC3 IBM_2145:ITSO_SVC2:admin>mkpartnership -bandwidth 50 ITSO_SVC1 IBM_2145:ITSO_SVC2:admin>mkpartnership -bandwidth 50 ITSO_SVC3 IBM_2145:ITSO_SVC2:admin>lspartnership id name location partnership bandwidth 000002006AC03A42 ITSO_SVC2 local 000002006BE04FC4 ITSO_SVC1 remote fully_configured 50 0000020060A06FB8 ITSO_SVC3 remote partially_configured_local 50 From ITSO_SVC3 to ITSO_SVC2 and ITSO_SVC4 IBM_2145:ITSO_SVC3:admin>mkpartnership -bandwidth 50 ITSO_SVC2 IBM_2145:ITSO_SVC3:admin>mkpartnership -bandwidth 50 ITSO_SVC4 IBM_2145:ITSO_SVC3:admin>lspartnership id name location partnership bandwidth 0000020060A06FB8 ITSO_SVC3 local 000002006AC03A42 ITSO_SVC2 remote fully_configured 50 0000020061C06FCA ITSO_SVC4 remote partially_configured_local 50 From ITSO_SVC4 to ITSO_SVC3 IBM_2145:ITSO_SVC4:admin>mkpartnership -bandwidth 50 ITSO_SVC3 IBM_2145:ITSO_SVC4:admin>lspartnership id name location partnership bandwidth 0000020061C06FCA ITSO_SVC4 local 0000020060A06FB8 ITSO_SVC3 remote fully_configured 50
After the SVC partnership has been configured, you can configure any rcrelationship or rcconsistgrp that you need. Make sure that a single volume is only in one relationship.
Because data consistency is needed across GM_DB_Pri and GM_DBLog_Pri, we create a Consistency Group to handle Global Mirror relationships for them. Because in this scenario the application files are independent of the database, we create a stand-alone Global Mirror relationship for GM_App_Pri. Figure 9-12 illustrates the Global Mirror relationship setup.
Preverification
To verify that both clustered systems can communicate with each other, use the lspartnershipcandidate command. Example 9-165 confirms that our clustered systems are communicating, because ITSO_SVC4 is an eligible SVC system candidate at ITSO_SVC1 for the SVC system partnership, and vice versa.
Example 9-165 Listing the available SVC systems for partnership
IBM_2145:ITSO_SVC4:admin>lspartnershipcandidate
id               configured name
000002006BE04FC4 no         ITSO_SVC1

In Example 9-166, we show the output of the lspartnership command before setting up the SVC system partnership for Global Mirror, so that it can be compared with the output after the partnership has been configured.
Example 9-166 Pre-verification of system configuration
IBM_2145:ITSO_SVC1:admin>lspartnership
id               name      location partnership bandwidth
000002006BE04FC4 ITSO_SVC1 local

IBM_2145:ITSO_SVC4:admin>lspartnership
id               name      location partnership bandwidth
0000020061C06FCA ITSO_SVC4 local
IBM_2145:ITSO_SVC1:admin>mkpartnership -bandwidth 100 ITSO_SVC4
IBM_2145:ITSO_SVC1:admin>lspartnership
id               name      location partnership                bandwidth
000002006BE04FC4 ITSO_SVC1 local
0000020061C06FCA ITSO_SVC4 remote   partially_configured_local 100

In Example 9-168, we create the partnership from ITSO_SVC4 back to ITSO_SVC1, specifying a 100 MBps bandwidth to be used for the background copy. After creating the partnership, verify that the partnership is fully configured by reissuing the lspartnership command.
Example 9-168 Creating the partnership from ITSO_SVC4 to ITSO_SVC1 and verifying the partnership
IBM_2145:ITSO_SVC4:admin>mkpartnership -bandwidth 100 ITSO_SVC1
IBM_2145:ITSO_SVC4:admin>lspartnership
id               name      location partnership      bandwidth
0000020061C06FCA ITSO_SVC4 local
000002006BE04FC4 ITSO_SVC1 remote   fully_configured 100

IBM_2145:ITSO_SVC1:admin>lspartnership
id               name      location partnership      bandwidth
000002006BE04FC4 ITSO_SVC1 local
0000020061C06FCA ITSO_SVC4 remote   fully_configured 100
IBM_2145:ITSO_SVC1:admin>chsystem -gminterdelaysimulation 20
IBM_2145:ITSO_SVC1:admin>chsystem -gmintradelaysimulation 40
IBM_2145:ITSO_SVC1:admin>chsystem -gmlinktolerance 200
IBM_2145:ITSO_SVC1:admin>lssystem
id 000002006BE04FC4
name ITSO_SVC1
location local
partnership
bandwidth
total_mdisk_capacity 866.5GB
space_in_mdisk_grps 766.5GB
space_allocated_to_vdisks 30.00GB
total_free_space 836.5GB
total_vdiskcopy_capacity 30.00GB
total_used_capacity 30.00GB
total_overallocation 3
total_vdisk_capacity 30.00GB
total_allocated_extent_capacity 31.50GB
statistics_status on
statistics_frequency 15
cluster_locale en_US
time_zone 520 US/Pacific
code_level 6.3.0.0 (build 54.0.1109090000)
console_IP 10.18.228.81:443
id_alias 000002006BE04FC4
gm_link_tolerance 200
gm_inter_cluster_delay_simulation 20
gm_intra_cluster_delay_simulation 40
gm_max_host_delay 5
email_reply
email_contact
email_contact_primary
email_contact_alternate
email_contact_location
email_contact2
email_contact2_primary
email_contact2_alternate
email_state stopped
inventory_mail_interval 0
cluster_ntp_IP_address
cluster_isns_IP_address
iscsi_auth_method chap
iscsi_chap_secret passw0rd
auth_service_configured no
auth_service_enabled no
auth_service_url
auth_service_user_name
auth_service_pwd_set no
auth_service_cert_set no
auth_service_type tip
relationship_bandwidth_limit 25
tier generic_ssd
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier generic_hdd
tier_capacity 766.50GB
tier_free_capacity 736.50GB
has_nas_key no
layer appliance
IBM_2145:ITSO_SVC1:admin>mkrcconsistgrp -cluster ITSO_SVC4 -name CG_W2K3_GM
RC Consistency Group, id [0], successfully created
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp
id name       master_cluster_id master_cluster_name aux_cluster_id   aux_cluster_name primary state relationship_count copy_type   cycling_mode
0  CG_W2K3_GM 000002006BE04FC4  ITSO_SVC1           0000020061C06FCA ITSO_SVC4                empty 0                  empty_group none
IBM_2145:ITSO_SVC1:admin>mkrcrelationship -master GM_DB_Pri -aux GM_DB_Sec -cluster ITSO_SVC4 -consistgrp CG_W2K3_GM -name GMREL1 -global
RC Relationship, id [0], successfully created
IBM_2145:ITSO_SVC1:admin>mkrcrelationship -master GM_DBLog_Pri -aux GM_DBLog_Sec -cluster ITSO_SVC4 -consistgrp CG_W2K3_GM -name GMREL2 -global
RC Relationship, id [1], successfully created
IBM_2145:ITSO_SVC1:admin>lsrcrelationship
id name   master_cluster_id master_cluster_name master_vdisk_id master_vdisk_name aux_cluster_id   aux_cluster_name aux_vdisk_id aux_vdisk_name primary consistency_group_id consistency_group_name state                bg_copy_priority progress copy_type cycling_mode
0  GMREL1 000002006BE04FC4  ITSO_SVC1           0               GM_DB_Pri         0000020061C06FCA ITSO_SVC4        0            GM_DB_Sec      master  0                    CG_W2K3_GM             inconsistent_stopped 50               0        global    none
1  GMREL2 000002006BE04FC4  ITSO_SVC1           1               GM_DBLog_Pri      0000020061C06FCA ITSO_SVC4        1            GM_DBLog_Sec   master  0                    CG_W2K3_GM             inconsistent_stopped 50               0        global    none
When implementing Global Mirror, the goal is to reach a consistent and synchronized state that can provide redundancy in case a hardware failure occurs that affects the SAN at the production site. In this section, we show how to start the stand-alone Global Mirror relationships and the Consistency Group.
IBM_2145:ITSO_SVC1:admin>startrcrelationship GMREL3
IBM_2145:ITSO_SVC1:admin>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
IBM_2145:ITSO_SVC1:admin>startrcconsistgrp CG_W2K3_GM
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp 0
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary master
state inconsistent_copying
relationship_count 2
freeze_time
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2
IBM_2145:ITSO_SVC1:admin>lsrcrelationship GMREL1
id 0
name GMREL1
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 0
master_vdisk_name GM_DB_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 0
aux_vdisk_name GM_DB_Sec
primary master
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state inconsistent_copying
bg_copy_priority 50
progress 38
freeze_time
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name

IBM_2145:ITSO_SVC1:admin>lsrcrelationship GMREL2
id 1
name GMREL2
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 1
master_vdisk_name GM_DBLog_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 1
aux_vdisk_name GM_DBLog_Sec
primary master
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state inconsistent_copying
bg_copy_priority 50
progress 76
freeze_time
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name

When all of the Global Mirror relationships complete the background copy, the Consistency Group enters the Consistent synchronized state, as shown in Example 9-176.
Example 9-176 Listing the Global Mirror Consistency Group
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2
IBM_2145:ITSO_SVC1:admin>stoprcrelationship -access GMREL3
IBM_2145:ITSO_SVC1:admin>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary
consistency_group_id
consistency_group_name
state idling
bg_copy_priority 50
progress
freeze_time
status
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
IBM_2145:ITSO_SVC1:admin>stoprcconsistgrp CG_W2K3_GM
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary master
state consistent_stopped
relationship_count 2
freeze_time
status
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2

If, afterwards, we want to enable access (write I/O) for the secondary volume, we can reissue the stoprcconsistgrp command specifying the -access parameter. The Consistency Group transitions to the Idling state, as shown in Example 9-179.
Example 9-179 Stopping a Global Mirror Consistency Group
IBM_2145:ITSO_SVC1:admin>stoprcconsistgrp -access CG_W2K3_GM
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary
state idling
relationship_count 2
freeze_time
status
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2
IBM_2145:ITSO_SVC1:admin>startrcrelationship -primary master -force GMREL3
IBM_2145:ITSO_SVC1:admin>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
Example 9-181 Restarting a Global Mirror Consistency Group while changing the copy direction
IBM_2145:ITSO_SVC1:admin>startrcconsistgrp -primary aux CG_W2K3_GM
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2
IBM_2145:ITSO_SVC1:admin>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name

IBM_2145:ITSO_SVC1:admin>switchrcrelationship -primary aux GMREL3
IBM_2145:ITSO_SVC1:admin>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
The copy direction for a Global Mirror Consistency Group is switched by issuing the switchrcconsistgrp command and specifying the primary volume. If the volume that is specified as the primary when issuing this command is already a primary, the command has no effect. In Example 9-183, we change the copy direction for the Global Mirror Consistency Group, specifying the auxiliary to become the primary.

Important: When the copy direction is switched, it is crucial that there is no outstanding I/O to the volume that transitions from primary to secondary, because all I/O will be inhibited when that volume becomes the secondary. Therefore, careful planning is required prior to using the switchrcconsistgrp command.
Example 9-183 Switching the copy direction for a Global Mirror Consistency Group
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2

IBM_2145:ITSO_SVC1:admin>switchrcconsistgrp -primary aux CG_W2K3_GM
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2
We assume that the source and target volumes have already been created and that the ISLs and zoning are in place, enabling the SVC systems to communicate. We also assume that the Global Mirror relationships have already been established. To change Global Mirror to cycling mode with change volumes, perform the following steps (a consolidated command sketch follows the list):
1. Create thin-provisioned change volumes for the primary and secondary volumes (both sites).
2. Stop the stand-alone relationship GMREL3 to change the cycling mode (primary site).
3. Set the cycling mode on the stand-alone relationship GMREL3 (primary site).
4. Set the change volume on the master volume of relationship GMREL3 (primary site).
5. Set the change volume on the auxiliary volume of relationship GMREL3 (secondary site).
6. Start the stand-alone relationship GMREL3 in cycling mode (primary site).
7. Stop the Consistency Group CG_W2K3_GM to change the cycling mode (primary site).
8. Set the cycling mode on the Consistency Group (primary site).
9. Set the change volume on the master volume of relationship GMREL1 of Consistency Group CG_W2K3_GM (primary site).
10. Set the change volume on the auxiliary volume of relationship GMREL1 (secondary site).
11. Set the change volume on the master volume of relationship GMREL2 of Consistency Group CG_W2K3_GM (primary site).
12. Set the change volume on the auxiliary volume of relationship GMREL2 (secondary site).
13. Start the Consistency Group CG_W2K3_GM in cycling mode (primary site).
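After the change volumes have been created, the stand-alone relationship conversion can be summarized as follows. This is a compact sketch only: the chrcrelationship -cyclingmode step is our assumption for the cycling-mode change whose listing is not reproduced in this copy, while the remaining commands appear with their full output in the examples that follow.

IBM_2145:ITSO_SVC1:admin>stoprcrelationship GMREL3
IBM_2145:ITSO_SVC1:admin>chrcrelationship -cyclingmode multi GMREL3
IBM_2145:ITSO_SVC1:admin>chrcrelationship -masterchange GM_App_Pri_CHANGE_VOL GMREL3
IBM_2145:ITSO_SVC4:admin>chrcrelationship -auxchange GM_App_Sec_CHANGE_VOL GMREL3
IBM_2145:ITSO_SVC1:admin>startrcrelationship GMREL3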
IBM_2145:ITSO_SVC1:admin>mkvdisk -iogrp 0 -mdiskgrp 0 -size 10 -unit gb -rsize 20% -autoexpand -grainsize 32 -name GM_DB_Pri_CHANGE_VOL
Virtual Disk, id [3], successfully created
IBM_2145:ITSO_SVC1:admin>mkvdisk -iogrp 0 -mdiskgrp 0 -size 10 -unit gb -rsize 20% -autoexpand -grainsize 32 -name GM_DBLog_Pri_CHANGE_VOL
Virtual Disk, id [4], successfully created
IBM_2145:ITSO_SVC1:admin>mkvdisk -iogrp 0 -mdiskgrp 0 -size 10 -unit gb -rsize 20% -autoexpand -grainsize 32 -name GM_App_Pri_CHANGE_VOL
Virtual Disk, id [5], successfully created
IBM_2145:ITSO_SVC4:admin>mkvdisk -iogrp 0 -mdiskgrp 0 -size 10 -unit gb -rsize 20% -autoexpand -grainsize 32 -name GM_DB_Sec_CHANGE_VOL
Virtual Disk, id [3], successfully created
IBM_2145:ITSO_SVC4:admin>mkvdisk -iogrp 0 -mdiskgrp 0 -size 10 -unit gb -rsize 20% -autoexpand -grainsize 32 -name GM_DBLog_Sec_CHANGE_VOL
Virtual Disk, id [4], successfully created
IBM_2145:ITSO_SVC4:admin>mkvdisk -iogrp 0 -mdiskgrp 0 -size 10 -unit gb -rsize 20% -autoexpand -grainsize 32 -name GM_App_Sec_CHANGE_VOL
Virtual Disk, id [5], successfully created
IBM_2145:ITSO_SVC1:admin>lsrcrelationship
id name   master_cluster_id master_cluster_name master_vdisk_id master_vdisk_name aux_cluster_id   aux_cluster_name aux_vdisk_id aux_vdisk_name primary consistency_group_id consistency_group_name state                   bg_copy_priority progress copy_type cycling_mode
0  GMREL1 000002006BE04FC4  ITSO_SVC1           0               GM_DB_Pri         0000020061C06FCA ITSO_SVC4        0            GM_DB_Sec      aux     0                    CG_W2K3_GM             consistent_synchronized 50                        global    none
1  GMREL2 000002006BE04FC4  ITSO_SVC1           1               GM_DBLog_Pri      0000020061C06FCA ITSO_SVC4        1            GM_DBLog_Sec   aux     0                    CG_W2K3_GM             consistent_synchronized 50                        global    none
2  GMREL3 000002006BE04FC4  ITSO_SVC1           2               GM_App_Pri        0000020061C06FCA ITSO_SVC4        2            GM_App_Sec     aux                                                 consistent_synchronized 50                        global    none

IBM_2145:ITSO_SVC1:admin>stoprcrelationship GMREL3
IBM_2145:ITSO_SVC1:admin>chrcrelationship -masterchange GM_App_Pri_CHANGE_VOL GMREL3
IBM_2145:ITSO_SVC1:admin>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 5
master_change_vdisk_name GM_App_Pri_CHANGE_VOL
aux_change_vdisk_id
aux_change_vdisk_name
IBM_2145:ITSO_SVC4:admin>chrcrelationship -auxchange GM_App_Sec_CHANGE_VOL 2
IBM_2145:ITSO_SVC4:admin>
IBM_2145:ITSO_SVC4:admin>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 5
master_change_vdisk_name GM_App_Pri_CHANGE_VOL
aux_change_vdisk_id 5
aux_change_vdisk_name GM_App_Sec_CHANGE_VOL
IBM_2145:ITSO_SVC1:admin>startrcrelationship GMREL3
IBM_2145:ITSO_SVC1:admin>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_copying
bg_copy_priority 50
progress 100
freeze_time 2011/10/04/20/37/20
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 5
master_change_vdisk_name GM_App_Pri_CHANGE_VOL
aux_change_vdisk_id 5
aux_change_vdisk_name GM_App_Sec_CHANGE_VOL
IBM_2145:ITSO_SVC1:admin>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_copying
bg_copy_priority 50
progress 100
freeze_time 2011/10/04/20/42/25
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 5
master_change_vdisk_name GM_App_Pri_CHANGE_VOL
aux_change_vdisk_id 5
aux_change_vdisk_name GM_App_Sec_CHANGE_VOL
IBM_2145:ITSO_SVC1:admin>stoprcconsistgrp CG_W2K3_GM
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_stopped
relationship_count 2
freeze_time
status
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2
IBM_2145:ITSO_SVC1:admin>chrcconsistgrp -cyclingmode multi CG_W2K3_GM
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_stopped
relationship_count 2
freeze_time
status
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2
9.14.28 Set change volume on master volume relationships of the Consistency Group
In Example 9-192, we change both relationships of the Consistency Group to add the change volume on the primary volumes. The lsrcrelationship output shows the name of each master change volume.
Example 9-192 Set change volume on master volume
IBM_2145:ITSO_SVC1:admin>chrcrelationship -masterchange GM_DB_Pri_CHANGE_VOL GMREL1
IBM_2145:ITSO_SVC1:admin>lsrcrelationship GMREL1
id 0
name GMREL1
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 0
master_vdisk_name GM_DB_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 0
aux_vdisk_name GM_DB_Sec
primary aux
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 3
master_change_vdisk_name GM_DB_Pri_CHANGE_VOL
aux_change_vdisk_id
aux_change_vdisk_name
IBM_2145:ITSO_SVC1:admin>
IBM_2145:ITSO_SVC1:admin>chrcrelationship -masterchange GM_DBLog_Pri_CHANGE_VOL GMREL2
IBM_2145:ITSO_SVC1:admin>lsrcrelationship GMREL2
id 1
name GMREL2
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 1
master_vdisk_name GM_DBLog_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 1
aux_vdisk_name GM_DBLog_Sec
primary aux
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 4
master_change_vdisk_name GM_DBLog_Pri_CHANGE_VOL
aux_change_vdisk_id
aux_change_vdisk_name
IBM_2145:ITSO_SVC4:admin>chrcrelationship -auxchange GM_DB_Sec_CHANGE_VOL GMREL1
IBM_2145:ITSO_SVC4:admin>lsrcrelationship GMREL1
id 0
name GMREL1
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 0
master_vdisk_name GM_DB_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 0
aux_vdisk_name GM_DB_Sec
primary aux
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 3
master_change_vdisk_name GM_DB_Pri_CHANGE_VOL
aux_change_vdisk_id 3
aux_change_vdisk_name GM_DB_Sec_CHANGE_VOL

IBM_2145:ITSO_SVC4:admin>chrcrelationship -auxchange GM_DBLog_Sec_CHANGE_VOL GMREL2
IBM_2145:ITSO_SVC4:admin>lsrcrelationship GMREL2
id 1
name GMREL2
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 1
master_vdisk_name GM_DBLog_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 1
aux_vdisk_name GM_DBLog_Sec
primary aux
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 4
master_change_vdisk_name GM_DBLog_Pri_CHANGE_VOL
aux_change_vdisk_id 4
aux_change_vdisk_name GM_DBLog_Sec_CHANGE_VOL
IBM_2145:ITSO_SVC1:admin>startrcconsistgrp CG_W2K3_GM
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_copying
relationship_count 2
freeze_time 2011/10/04/21/02/33
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2

IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_copying
relationship_count 2
freeze_time 2011/10/04/21/07/42
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2
Before you upgrade the SVC software, ensure that all I/O paths between all hosts and the SAN are working; otherwise, the applications might experience I/O failures during the software upgrade. You can verify the I/O paths by using the Subsystem Device Driver (SDD) datapath query commands. Example 9-195 shows the output.
Example 9-195 Query adapter
#datapath query adapter

Active Adapters :2
Adpt#  Name    State   Mode    Select  Errors  Paths  Active
  0    fscsi0  NORMAL  ACTIVE  1445    0       4      4
  1    fscsi1  NORMAL  ACTIVE  1888    0       4      4
#datapath query device

Total Devices : 2

DEV#: 0 DEVICE NAME: vpath0 TYPE: 2145 POLICY: Optimized
SERIAL: 60050768018201BF2800000000000000
==========================================================================
Path#    Adapter/Hard Disk    State   Mode     Select  Errors
  0      fscsi0/hdisk3        OPEN    NORMAL   0       0
  1      fscsi1/hdisk7        OPEN    NORMAL   972     0

DEV#: 1 DEVICE NAME: vpath1 TYPE: 2145 POLICY: Optimized
SERIAL: 60050768018201BF2800000000000002
==========================================================================
Path#    Adapter/Hard Disk    State   Mode     Select  Errors
  0      fscsi0/hdisk4        OPEN    NORMAL   784     0
  1      fscsi1/hdisk8        OPEN    NORMAL   0       0

Write-through mode: During a software upgrade, there are periods when not all of the nodes in the system are operational. As a result, the cache operates in write-through mode, which affects the throughput, latency, and bandwidth aspects of performance.

Verify that your uninterruptible power supply configuration is also set up correctly (even if your system is running without problems). Specifically, make sure that the following conditions are true:
- All uninterruptible power supply units get their power from an external source and are not daisy-chained. Make sure that each uninterruptible power supply unit is not supplying power to another node's uninterruptible power supply unit.
- The power cable and the serial cable that come from each node go back to the same uninterruptible power supply unit. If the cables are crossed and go back to separate uninterruptible power supply units, then during the upgrade, while one node is shut down, another node might also mistakenly be shut down.

Important: Do not share the SVC uninterruptible power supply unit with any other devices.

You must also ensure that all I/O paths are working for each host that runs I/O operations to the SAN during the software upgrade. You can check the I/O paths by using the datapath query commands.
You do not need to check for hosts that have no active I/O operations to the SAN during the software upgrade.
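When many hosts are involved, this check can be scripted on each host. The following is a minimal sketch of our own (not from the book), assuming a POSIX shell on an SDD host with the datapath utility in the PATH; it prints any path whose State column is not OPEN and exits nonzero if one is found:

#!/bin/sh
# Flag any SDD path that is not in the OPEN state before upgrading.
datapath query device | awk '$2 ~ /fscsi/ { if ($3 != "OPEN") { print; bad=1 } } END { exit bad }'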
Procedure
To upgrade the SVC system software, perform the following steps:
1. Before starting the upgrade, back up the configuration (see 9.16, Backing up the SVC system configuration on page 621) and save the backup config file in a safe place.
2. Before starting to transfer the software code to the clustered system, clear any previously uploaded upgrade files from the /home/admin/upgrade directory on the SVC system, as shown in Example 9-196.
Example 9-196 cleardumps command
IBM_2145:ITSO_SVC1:admin>cleardumps -prefix /home/admin/upgrade
IBM_2145:ITSO_SVC1:admin>
3. Save the data collection for support diagnosis in case of problems, as shown in Example 9-197.
Example 9-197 svc_snap -c command
IBM_2145:ITSO_SVC1:admin>svc_snap -c
Collecting system information...
Creating Config Backup
Dumping error log...
Creating Snap
Snap data collected in /dumps/snap.110711.111003.111031.tgz
4. List the dump that was generated by the previous command, as shown in Example 9-198.
Example 9-198 lsdumps command
IBM_2145:ITSO_SVC1:admin>lsdumps
id filename
0  svc.config.cron.bak_108283
1  sel.110711.trc
2  rtc.race_mq_log.txt.110711.trc
3  ethernet.110711.trc
4  svc.config.cron.bak_110711
5  svc.config.cron.xml_110711
6  svc.config.cron.log_110711
7  svc.config.cron.sh_110711
8  svc.config.backup.bak_110711
9  svc.config.backup.tmp.xml
10 110711.trc
11 svc.config.backup.xml_110711
12 svc.config.backup.now.xml
13 snap.110711.111003.111031.tgz
5. Save the generated dump in a safe place using the pscp command, as shown in Example 9-199 on page 608.

Note: The pscp command will not work if you have not loaded your PuTTY SSH private key (or your user ID and password) into the PuTTY Pageant agent, as shown in Figure 9-14.
Figure 9-14 Pageant example

Example 9-199 pscp -load command
C:\Program Files (x86)\PuTTY>pscp -load ITSO_SVC1 admin@10.18.228.173:/dumps/snap.110711.111003.111031.tgz c:\snap.110711.111003.111031.tgz
snap.110711.111003.111031 | 4999 kB | 4999.8 kB/s | ETA: 00:00:00 | 100%
6. Upload the new software package using PuTTY Secure Copy. Enter the command as shown in Example 9-200.
Example 9-200 pscp -load command
C:\Program Files (x86)\PuTTY>pscp -load ITSO_SVC1 c:\IBM2145_INSTALL_6.3.0.0.110926.tgz.gpg admin@10.18.228.81:/home/admin/upgrade
IBM2145_INSTALL_6.3.0.0.1 | 353712 kB | 11053.5 kB/s | ETA: 00:00:00 | 100%

Also upload the SAN Volume Controller Software Upgrade Test Utility by using PuTTY Secure Copy. Enter the command as shown in Example 9-201.
Example 9-201 Upload utility
C:\>pscp -load ITSO_SVC1 IBM2145_INSTALL_svcupgradetest_6.1 admin@10.18.229.81:/home/admin/upgrade
IBM2145_INSTALL_svcupgrad | 11 kB | 12.0 kB/s | ETA: 00:00:00 | 100%

7. Verify that the packages were successfully delivered through the PuTTY command-line application by entering the lsdumps command, as shown in Example 9-202.
Example 9-202 lsdumps command
IBM_2145:ITSO_SVC1:admin>lsdumps -prefix /home/admin/upgrade
id filename
0  IBM2145_INSTALL_6.3.0.0.
1  IBM2145_INSTALL_svcupgradetest_6.1

8. Now that the packages are uploaded, install the SAN Volume Controller Software Upgrade Test Utility, as shown in Example 9-203 on page 609.
Example 9-203 Installing the Software Upgrade Test Utility
IBM_2145:ITSO_SVC1:admin>applysoftware -file IBM2145_INSTALL_svcupgradetest_6.1
CMMVC6227I The package installed successfully.

9. Test the upgrade for known issues that might prevent a software upgrade from completing successfully, as shown in Example 9-204.
Example 9-204 svcupgradetest command
IBM_2145:ITSO_SVC1:admin>svcupgradetest -v 6.3.0.0
svcupgradetest version 6.1

Please wait while the tool tests for issues that may prevent
a software upgrade from completing successfully. The test will
take approximately one minute to complete.

The test has not found any problems with the 2145 cluster.
Please proceed with the software upgrade.

Important: If the svcupgradetest command produces any errors, troubleshoot the errors using the maintenance procedures before continuing.

10. Use the applysoftware command to apply the software upgrade, as shown in Example 9-205.
Example 9-205 Apply upgrade command example
IBM_2145:ITSO_SVC1:admin>applysoftware -file IBM2145_INSTALL_6.3.0.0

While the upgrade runs, you can check the status as shown in Example 9-206.
Example 9-206 Check update status
IBM_2145:ITSO_SVC1:admin>lssoftwareupgradestatus
status
upgrading

11. The new code is distributed and applied to each node in the SVC system. After installation, each node is automatically restarted one at a time. If a node does not restart automatically during the upgrade, you must repair it manually.
12. Eventually, both nodes display Cluster: on line one of the SVC front panel and the name of your system on line two of the panel. Be prepared for a wait (in our case, we waited approximately 40 minutes).

Performance: During this process, both your CLI and GUI vary from sluggish (slow) to unresponsive. The important thing is that I/O to the hosts can continue throughout this process.

13. To verify that the upgrade was successful, you can perform either of the following options: You can run the lssystem and lsnodevpd commands as shown in Example 9-207. (We truncated the lssystem and lsnodevpd information for this example.)
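You can repeat the status check from a management workstation until the upgrade completes. The following loop is a sketch of our own using OpenSSH (the book's examples use PuTTY, where plink provides the equivalent remote-command capability); the IP address is the system's cluster address:

while ssh admin@10.18.228.81 lssoftwareupgradestatus | grep -q upgrading
do
    sleep 60
done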
Example 9-207 lssystem and lsnodevpd commands
IBM_2145:ITSO_SVC1:admin>lssystem
...
cluster_locale en_US
time_zone 520 US/Pacific
code_level 6.3.0.0 (build 54.0.1109090000)
console_IP 10.18.228.81:443
id_alias 000002006BE04FC4
gm_link_tolerance 200
gm_inter_cluster_delay_simulation 20
gm_intra_cluster_delay_simulation 40
gm_max_host_delay 5
...
tier_capacity 766.50GB
tier_free_capacity 736.50GB
has_nas_key no
layer appliance

IBM_2145:ITSO_SVC1:admin>lsnodevpd 1
id 1
system board: 23 fields
part_number 31P1090
...
software: 4 fields
id 1
node_name SVC1N1
WWNN 0x50050768010027e2
code_level 6.3.0.0 (build 54.0.1109090000)

Or you can check whether the code installation completed without error by copying the log to your management workstation, as explained in 9.15.2, Running maintenance procedures on page 610. Open the event log in WordPad and search for the "Software Install completed." message. At this point, you have completed the required tasks to upgrade the SVC software.
IBM_2145:ITSO_SVC1:admin>dumperrlog
This command generates an errlog_timestamp file, such as errlog_110711_111003_090500, where:
- errlog is part of the default prefix for all event log files.
- 110711 is the panel name of the current configuration node.
- 111003 is the date (YYMMDD).
- 090500 is the time (HHMMSS).
You can add the -prefix parameter to your command to change the default prefix of errlog to something else (Example 9-209).
Example 9-209 dumperrlog -prefix command
IBM_2145:ITSO_SVC1:admin>dumperrlog -prefix ITSO_SVC1_errlog

This command creates a file called ITSO_SVC1_errlog_timestamp. To see the file name, enter the following command (Example 9-210).
Example 9-210 lsdumps command
IBM_2145:ITSO_SVC1:admin>lsdumps -prefix /dumps/elogs
id filename
0  errlog_110711_111003_111056
1  testerrorlog_110711_111003_135358
2  ITSO_SVC1_errlog_110711_111003_141111

Maximum number of event log dump files: A maximum of ten event log dump files per node will be kept on the system. When the eleventh dump is made, the oldest existing dump file for that node is overwritten. Note that the directory might also hold log files retrieved from other nodes; these files are not counted. The SVC deletes the oldest file (when necessary) for this node to maintain the maximum number of files, but it does not delete files from other nodes unless you issue the cleardumps command.

After you generate your event log, you can issue the finderr command to scan the event log for any unfixed events, as shown in Example 9-211.
Example 9-211 finderr command
IBM_2145:ITSO_SVC1:admin>finderr
Highest priority unfixed error code is [1550]

As you can see, we have one unfixed event on our system. To analyze this event, download the event log onto your PC and examine it in more detail. Use the PuTTY Secure Copy process to copy the file from the system to your local management workstation, as shown in Example 9-212.
Example 9-212 pscp command: Copy event logs off of the SVC
In W2K3: Start > Run > cmd
C:\Program Files (x86)\PuTTY>pscp -load ITSO_SVC1 admin@10.18.228.81:/dumps/elogs/ITSO_SVC1_errlog_110711_111003_141111 c:\ITSO_SVC1_errlog_110711_111003_141111
ITSO_SVC1_errlog_110711_1 | 6 kB | 6.8 kB/s | ETA: 00:00:00 | 100%
To use the Run option, you must know where your pscp.exe file is located. In this case, it is in the C:\Program Files\PuTTY\ folder. This command copies the file called ITSO_SVC1_errlog_110711_111003_141111 to the C:\ directory on our local workstation, keeping the same file name. Open the file in WordPad (Notepad does not format the window as well). You will see information similar to that shown in Example 9-213. (We truncated this list for the purposes of this example.)
Example 9-213 errlog in WordPad
Error Log Entry 0
Node Identifier       : SVC1N2
Object Type           : node
Object ID             : 2
Copy ID               :
Sequence Number       : 101
Root Sequence Number  : 101
First Error Timestamp : Mon Oct  3 10:50:13 2011
                      : Epoch + 1317664213
Last Error Timestamp  : Mon Oct  3 10:50:13 2011
                      : Epoch + 1317664213
Error Count           : 1
Error ID              : 980221 : Error log cleared
Error Code            :
Status Flag           : SNMP trap raised
Type Flag             : INFORMATION

00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
By scrolling through, or searching for the term unfixed, you can find more detail about the problem. You might see more entries in the event log that have the status of unfixed. After rectifying the problem, you can mark the event as fixed in the log by issuing the cherrstate command against its sequence number; see Example 9-214.
Example 9-214 cherrstate command
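The listing for this example is not reproduced in this copy. A minimal sketch would be the following, where the sequence number is the one reported for the unfixed event (103 here is a placeholder taken from the earlier lseventlog output):

IBM_2145:ITSO_SVC1:admin>cherrstate -sequencenumber 103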
If you accidentally mark the wrong event as fixed, you can mark it as unfixed again by entering the same command and appending the -unfix flag to the end, as shown in Example 9-215.
Example 9-215 unfix flag
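This listing is also not reproduced in this copy; a minimal sketch of the same command with the -unfix flag appended (using the same placeholder sequence number) would be:

IBM_2145:ITSO_SVC1:admin>cherrstate -sequencenumber 103 -unfix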
IBM_2145:ITSO_SVC1:admin>mksnmpserver -error on -warning on -info on -ip 9.43.86.160 -community SVC
SNMP Server id [0] successfully created
This command sends all error, warning, and informational events to the SVC community on the SNMP manager with the IP address 9.43.86.160.
IBM_2145:ITSO_SVC1:admin>mksyslogserver -ip 10.64.210.231 -name Syslogserv1
Syslog Server id [0] successfully created

When we have configured our syslog server, we can display the current syslog server configurations in our system, as shown in Example 9-218.
Example 9-218 lssyslogserver command
IBM_2145:ITSO_SVC1:admin>lssyslogserver
id name        IP_address    facility error warning info
0  Syslogserv1 10.64.210.231 0        on    on      on
1  Syslogserv1 10.64.210.231          on    on      on
IBM_2145:ITSO_SVC1:admin>mkemailserver -ip 192.168.1.1
Email Server id [0] successfully created
IBM_2145:ITSO_SVC1:admin>lsemailserver 0
id 0
name emailserver0
IP_address 192.168.1.1
port 25

We can configure an email user that will receive email notifications from the SVC system. We can define up to 12 users to receive emails from our SVC. Using the lsemailuser command, we can verify who is already registered and what type of information is sent to that user, as shown in Example 9-220.
Example 9-220 lsemailuser command
IBM_2145:ITSO_SVC1:admin>lsemailuser
id name               address              user_type error warning info inventory
0  IBM_Support_Center callhome0@de.ibm.com support   on    off     off  on
We can also create a new user, for example a SAN administrator, as shown in Example 9-221.
Example 9-221 mkemailuser command
IBM_2145:ITSO_SVC1:admin>mkemailuser -address SANadmin@ibm.com -error on -warning on -info on -inventory on
User, id [0], successfully created
- Critical: Events that put the node into service state and prevent the node from joining the system (error codes 500 - 699). Note: Deleting a node from a system causes nodes to enter service state as well.
- Non-critical: Partial hardware faults, for example, one PSU failed in a 2145-CF8 (error codes 800 - 899).

To display the event log, use the lseventlog command, as shown in Example 9-222 on page 616.

IBM_2145:ITSO_SVC1:admin>lseventlog -count 2
sequence_number last_timestamp object_type object_id object_name copy_id status fixed event_id error_code description
102 111003105018 cluster ITSO_SVC1 message no 981004 FC discovery occurred, no configuration changes were detected
103 111003111036 cluster ITSO_SVC1 message no 981004 FC discovery occurred, no configuration changes were detected

IBM_2145:ITSO_SVC1:admin>lseventlog 103
sequence_number 103
first_timestamp 111003111036
first_timestamp_epoch 1317665436
last_timestamp 111003111036
last_timestamp_epoch 1317665436
object_type cluster
object_id
object_name ITSO_SVC1
copy_id
reporting_node_id 1
reporting_node_name SVC1N1
root_sequence_number
event_count 1
status message
fixed no
auto_fixed no
notification_type informational
event_id 981004
event_id_text FC discovery occurred, no configuration changes were detected
error_code
error_code_text
sense1 01 01 00 00 7E 0B 00 00 04 02 00 00 01 00 01 00
sense2 00 00 00 00 10 00 00 00 08 00 08 00 00 00 00 00
sense3 00 00 00 00 00 00 00 00 F2 FF 01 00 00 00 00 00
sense4 0E 00 00 00 FC FF FF FF 03 00 00 00 07 00 00 00
sense5 00 00 06 00 00 00 00 00 00 00 00 00 00 00 00 00
sense6 00 00 00 00 03 00 00 00 00 00 00 00 00 00 00 00
sense7 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
sense8 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
These commands allow you to view the most recent events (you can specify the -count parameter to define how many events to display). Use the method described in 9.15.2, Running maintenance procedures on page 610 to upload and analyze the event log in more detail. To clear the event log, you can issue the clearerrlog command, as shown in Example 9-222.
Example 9-222 clearerrlog command
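The command listing is not reproduced in this copy; a minimal sketch, using the -force variant discussed below, would be:

IBM_2145:ITSO_SVC1:admin>clearerrlog -force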
Using the -force flag stops any confirmation requests from appearing. When executed, this command clears all of the entries from the event log. This process proceeds even if there are unfixed errors in the log, and it also clears any status events that are in the log.

Note: This command is destructive for the event log. Only use it when you have either rebuilt the system, or when you have fixed a major problem that has caused many entries in the event log that you do not want to fix manually.
IBM_2145:ITSO_SVC1:admin>lslicense
used_flash 0.00
used_remote 0.03
used_virtualization 0.75
license_flash 500
license_remote 500
license_virtualization 500
license_physical_disks 0
license_physical_flash off
license_physical_remote off

The current license settings for the system are displayed in the viewing license settings log window. These settings show whether you are licensed to use the FlashCopy, Metro Mirror, Global Mirror, or Virtualization features, and the storage capacity that is licensed for virtualization. Typically, the license settings log contains entries, because feature options must be set as part of the web-based system creation process. Consider, for example, that you have purchased an additional 5 TB of Metro Mirror and Global Mirror licensing on top of your existing 20 TB license. Example 9-224 shows the command that you enter.
Example 9-224 chlicense command
IBM_2145:ITSO_SVC1:admin>chlicense -remote 25

To turn a feature off, specify 0 TB as the capacity for the feature that you want to disable.
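For example, a sketch of disabling the FlashCopy license (the -flash parameter is assumed here by analogy with the license_flash field in the lslicense output):

IBM_2145:ITSO_SVC1:admin>chlicense -flash 0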
To verify that the changes you have made are reflected in your SVC configuration, you can issue the lslicense command as before (see Example 9-225).
Example 9-225 lslicense command: Verifying changes
IBM_2145:ITSO_SVC1:admin>lslicense
used_flash 0.00
used_remote 0.03
used_virtualization 0.75
license_flash 500
license_remote 25
license_virtualization 500
license_physical_disks 0
license_physical_flash off
license_physical_remote off
The lsdumps command with the -prefix parameter lists all of the dumps in the specified directory, for example, the featurization logs in /dumps/feature. Example 9-227 shows the equivalent command for the I/O statistics directory, /dumps/iostats.

Example 9-227 lsdumps with -prefix /dumps/iostats command
IBM_2145:ITSO_SVC1:admin>lsdumps -prefix /dumps/iostats
id filename
0  Nm_stats_110711_111003_125706
1  Nn_stats_110711_111003_125706
2  Nv_stats_110711_111003_125706
3  Nd_stats_110711_111003_125706
4  Nv_stats_110711_111003_131204
5  Nd_stats_110711_111003_131204
6  Nn_stats_110711_111003_131204
........
Software dump
The lsdumps command without a prefix lists the contents of the /dumps directory. This directory holds general debug information, software and application dumps, and livedumps. Example 9-230 shows the command.
Example 9-230 lsdumps command without prefix
IBM_2145:ITSO_SVC1:admin>lsdumps
id filename
0  svc.config.cron.bak_108283
1  sel.110711.trc
2  rtc.race_mq_log.txt.110711.trc
3  ethernet.110711.trc
4  svc.config.cron.bak_110711
5  svc.config.cron.xml_110711
6  svc.config.cron.log_110711
7  svc.config.cron.sh_110711
8  svc.config.backup.bak_110711
9  svc.config.backup.tmp.xml
10 110711.trc
11 svc.config.backup.xml_110711
12 svc.config.backup.now.xml
13 snap.110711.111003.111031.tgz
Wildcards: The following rules apply to the use of wildcards with the SAN Volume Controller CLI:
- The wildcard character is an asterisk (*).
- The command can contain a maximum of one wildcard.
- When you use a wildcard, you must surround the filter entry with double quotation marks (""), for example:
  >cleardumps -prefix "/dumps/elogs/*.txt"

Example 9-231 shows an example of the cpdumps command.
Example 9-231 cpdumps command
IBM_2145:ITSO_SVC1:admin>cpdumps -prefix /dumps/configs n4

Now that you have copied the configuration dump file from node n4 to your configuration node, you can use PuTTY Secure Copy to copy the file to your management workstation for further analysis. To clear the dumps, you can run the cleardumps command. Again, you can append the node name if you want to clear dumps from a node other than the current configuration node (the default for the cleardumps command). The commands in Example 9-232 clear all logs or dumps from the SVC node SVC1N2.
Example 9-232 cleardumps command
IBM_2145:ITSO_SVC1:admin>cleardumps -prefix /dumps SVC1N2
IBM_2145:ITSO_SVC1:admin>cleardumps -prefix /dumps/iostats SVC1N2
IBM_2145:ITSO_SVC1:admin>cleardumps -prefix /dumps/iotrace SVC1N2
IBM_2145:ITSO_SVC1:admin>cleardumps -prefix /dumps/feature SVC1N2
IBM_2145:ITSO_SVC1:admin>cleardumps -prefix /dumps/config SVC1N2
IBM_2145:ITSO_SVC1:admin>cleardumps -prefix /dumps/elog SVC1N2
IBM_2145:ITSO_SVC1:admin>cleardumps -prefix /home/admin/upgrade SVC1N2
9.16.1 Prerequisites
You must have the following prerequisites in place:
- All nodes must be online.
- No object name can begin with an underscore.
- All objects must have non-default names, that is, names that are not assigned by the SVC. Although we advise that objects have non-default names at the time that the backup is taken, this prerequisite is not mandatory; objects with default names are renamed when they are restored.

Example 9-233 shows an example of the svcconfig backup command.
Example 9-233 svcconfig backup command
IBM_2145:ITSO_SVC1:admin>svcconfig backup
..................
CMMVC6130W Cluster ITSO_SVC4 with inter-cluster partnership fully_configured will not be restored
.........................................................................................
CMMVC6155I SVCCONFIG processing completed successfully

As you can see in Example 9-233, we received a CMMVC6130W Cluster ITSO_SVC4 with inter-cluster partnership fully_configured will not be restored message. This message indicates that individual systems in a multi-system environment must be backed up individually; if recovery is required, it is performed only on the system where the recovery commands are executed. Example 9-234 shows the pscp command.
Example 9-234 pscp command
C:\Program Files\PuTTY>pscp -load ITSO_SVC1 admin@10.18.229.81:/tmp/svc.config.backup.xml c:\temp\clibackup.xml
clibackup.xml | 97 kB | 97.2 kB/s | ETA: 00:00:00 | 100%

The following scenario illustrates the value of configuration backup:
1. Use the svcconfig command to create a backup file on the clustered system that contains details about the current system configuration.
2. Store the backup configuration on a form of tertiary storage. You must copy the backup file from the clustered system, or it becomes lost if the system crashes.
3. If a sufficiently severe failure occurs, the system might be lost. Both the configuration data (for example, the system definitions of hosts, I/O Groups, MDGs, and MDisks) and the application data on the virtualized disks are lost. In this scenario, it is assumed that the application data can be restored from normal client backup procedures. However, before you can perform this restoration, you must reinstate the system as it was configured at the time of the failure. Therefore, you restore the same MDGs, I/O Groups, host definitions, and volumes that existed prior to the failure. Then you can copy the application data back onto these volumes and resume operations.
4. Recover the hardware: hosts, SVCs, disk controller systems, disks, and SAN fabric. The hardware and SAN fabric must physically be the same as the hardware and SAN fabric that were used before the failure.
5. Re-initialize the clustered system with the configuration node; the other nodes are recovered when the configuration is restored.
6. Restore your clustered system configuration using the backup configuration file that was generated prior to the failure.
7. Restore the data on your volumes using your preferred restoration solution or with help from IBM Service.
8. Resume normal operations.
After the svcconfig restore -execute command is started, consider any prior user data on the volumes destroyed. The user data must be recovered through your usual application data backup and restore process. See IBM TotalStorage Open Software Family SAN Volume Controller: Command-Line Interface User's Guide, GC27-2287, for more information about this topic. For a detailed description of the SVC configuration backup and restore functions, see IBM TotalStorage Open Software Family SAN Volume Controller: Configuration Guide, GC27-2286.
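As a sketch of step 6 in the scenario above, the restore is a two-phase operation; the following sequence is our assumption of the standard flow (see the CLI guide referenced above for the authoritative syntax):

IBM_2145:ITSO_SVC1:admin>svcconfig restore -prepare
IBM_2145:ITSO_SVC1:admin>svcconfig restore -execute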
IBM_2145:ITSO_SVC1:admin>lsquorum
quorum_index status id name   controller_id controller_name active object_type override
0            online 1  mdisk1 2             ITSO-DS3500     no     mdisk       no
1            online 0  mdisk0 2             ITSO-DS3500     yes    mdisk       no
2            online 3  mdisk3 2             ITSO-DS3500     no     mdisk       no

IBM_2145:ITSO_SVC1:admin>lsquorum 1
quorum_index 1
status online
id 0
name mdisk0
controller_id 2
controller_name ITSO-DS3500
active yes
object_type mdisk
override no

IBM_2145:ITSO_SVC1:admin>lsquorum
quorum_index status id name   controller_id controller_name active object_type override
0            online 1  mdisk1 2             ITSO-DS3500     no     mdisk       no
1            online 0  mdisk0 2             ITSO-DS3500     yes    mdisk       no
2            online 3  mdisk3 2             ITSO-DS3500     no     mdisk       no

IBM_2145:ITSO_SVC1:admin>chquorum -mdisk 9 2
IBM_2145:ITSO_SVC1:admin>lsquorum
quorum_index status id name   controller_id controller_name active object_type override
0            online 1  mdisk1 2             ITSO-DS3500     no     mdisk       no
1            online 0  mdisk0 2             ITSO-DS3500     yes    mdisk       no
2            online 9  mdisk9 3             ITSO-DS5000     no     mdisk       no
As you can see in Example 9-237 on page 624, quorum index 2 has been moved from MDisk3 on the ITSO-DS3500 controller to MDisk9 on the ITSO-DS5000 controller.
Example 9-238 shows the two new sets of commands introduced with the Service Assistant.
Example 9-238 sainfo and satask command
IBM_2145:ITSO_SVC1:admin>sainfo -h
The following actions are available with this command :
        lscmdstatus
        lsfiles
        lsservicenodes
        lsservicerecommendation
        lsservicestatus

IBM_2145:ITSO_SVC1:admin>satask -h
The following actions are available with this command :
        chenclosurevpd
        chnodeled
        chserviceip
        chwwnn
        cpfiles
        installsoftware
        leavecluster
        mkcluster
        rescuenode
        setlocale
        setpacedccu
        settempsshkey
        snap
        startservice
        stopnode
        stopservice
        t3recovery

Attention: The sainfo and satask command sets must be used only under IBM Support direction. Incorrect use of these commands can lead to unexpected results.
IBM_2145:ITSO_SVC1:admin>lsfabric
remote_wwpn      remote_nportid id node_name local_wwpn       local_port local_nportid state    name       cluster_name type
5005076801405034 030A00         1  SVC1N1    50050768014027E2 1          030800        active   SVC1N2     ITSO_SVC1    node
5005076801405034 030A00         1  SVC1N1    50050768011027E2 3          030900        active   SVC1N2     ITSO_SVC1    node
5005076801305034 040A00         1  SVC1N1    50050768013027E2 2          040800        active   SVC1N2     ITSO_SVC1    node
5005076801305034 040A00         1  SVC1N1    50050768012027E2 4          040900        active   SVC1N2     ITSO_SVC1    node
50050768012027E2 040900         2  SVC1N2    5005076801305034 2          040A00        active   SVC1N1     ITSO_SVC1    node
50050768012027E2 040900         2  SVC1N2    5005076801205034 4          040B00        active   SVC1N1     ITSO_SVC1    node
500507680120505C 040F00         1  SVC1N1    50050768013027E2 2          040800        active   SVC4N2     ITSO_SVC4    node
500507680120505C 040F00         1  SVC1N1    50050768012027E2 4          040900        active   SVC4N2     ITSO_SVC4    node
500507680120505C 040F00         2  SVC1N2    5005076801305034 2          040A00        active   SVC4N2     ITSO_SVC4    node
500507680120505C 040F00         2  SVC1N2    5005076801205034 4          040B00        active   SVC4N2     ITSO_SVC4    node
50050768013027E2 040800         2  SVC1N2    5005076801305034 2          040A00        active   SVC1N1     ITSO_SVC1    node
....
Above and below rows have been removed for brevity
....
20690080E51B09E8 041900         1  SVC1N1    50050768013027E2 2          040800        inactive ITSO-DS3500              controller
20690080E51B09E8 041900         1  SVC1N1    50050768012027E2 4          040900        inactive ITSO-DS3500              controller
20690080E51B09E8 041900         2  SVC1N2    5005076801305034 2          040A00        inactive ITSO-DS3500              controller
20690080E51B09E8 041900         2  SVC1N2    5005076801205034 4          040B00        inactive ITSO-DS3500              controller
50050768013037DC 041400         1  SVC1N1    50050768013027E2 2          040800        active   ITSOSVC3N1 ITSO_SVC3    node
50050768013037DC 041400         1  SVC1N1    50050768012027E2 4          040900        active   ITSOSVC3N1 ITSO_SVC3    node
50050768013037DC 041400         2  SVC1N2    5005076801305034 2          040A00        active   ITSOSVC3N1 ITSO_SVC3    node
50050768013037DC 041400         2  SVC1N2    5005076801205034 4          040B00        active   ITSOSVC3N1 ITSO_SVC3    node
5005076801101D1C 031500         1  SVC1N1    50050768014027E2 1          030800        active   ITSOSVC3N2 ITSO_SVC3    node
5005076801101D1C 031500         1  SVC1N1    50050768011027E2 3          030900        active   ITSOSVC3N2 ITSO_SVC3    node
5005076801101D1C 031500         2  SVC1N2    5005076801405034 1          030A00        active   ITSOSVC3N2 ITSO_SVC3    node
.....
Above and below rows have been removed for brevity
.....
5005076801201D22 021300         1  SVC1N1    50050768013027E2 2          040800        active   SVC2N2     ITSO_SVC2    node
5005076801201D22 021300         1  SVC1N1    50050768012027E2 4          040900        active   SVC2N2     ITSO_SVC2    node
5005076801201D22 021300         2  SVC1N2    5005076801305034 2          040A00        active   SVC2N2     ITSO_SVC2    node
5005076801201D22 021300         2  SVC1N2    5005076801205034 4          040B00        active   SVC2N2     ITSO_SVC2    node
50050768011037DC 011513         1  SVC1N1    50050768014027E2 1          030800        active   ITSOSVC3N1 ITSO_SVC3    node
50050768011037DC 011513         1  SVC1N1    50050768011027E2 3          030900        active   ITSOSVC3N1 ITSO_SVC3    node
50050768011037DC 011513         2  SVC1N2    5005076801405034 1          030A00        active   ITSOSVC3N1 ITSO_SVC3    node
50050768011037DC 011513         2  SVC1N2    5005076801105034 3          030B00        active   ITSOSVC3N1 ITSO_SVC3    node
5005076801301D22 021200         1  SVC1N1    50050768013027E2 2          040800        active   SVC2N2     ITSO_SVC2    node
5005076801301D22 021200         1  SVC1N1    50050768012027E2 4          040900        active   SVC2N2     ITSO_SVC2    node
....
Above and below rows have been removed for brevity
....
For more detail about the lsfabric command, see IBM System Storage SAN Volume Controller and Storwize V7000 Command-Line Interface User's Guide Version 6.3.0, GC27-2287.
Chapter 10. SAN Volume Controller operations using the GUI
From this Home panel, there is a dynamic menu in the left part of the window.
Dynamic menu
This new version of the SVC GUI includes a new dynamic menu located in the left column of the window. To navigate using this menu, move the mouse over the various icons and choose a page that you want to display, as shown in Figure 10-2 on page 633.
A non-dynamic version of this menu exists for slow connections. To access the non-dynamic menu, select Low graphics mode as shown in Figure 10-3.
Figure 10-4 on page 634 shows the non-dynamic version of the menu.
In this case, the upper part of the page contains a pull-down menu for navigating between submenus. For example, in Figure 10-4, Volumes, Volumes by Pool, and Volumes by Host are submenus (pull-down menus) of the Volumes menu.
If there are issues on your cluster nodes, external storage, or remote partnerships, you will be informed here, as shown in Figure 10-7.
You will be able to fix the error by clicking on the Status Alert Bar, which will direct you to the troubleshooting panel.
The following information is displayed in this window. To view all of it, you need to use the up and down arrows:
- Allocated Capacity
- Free Capacity
- Physical Capacity
- Virtual Capacity
- Over-allocation
Clicking within the square, as shown in Figure 10-9, also displays information about recently completed tasks, as shown in Figure 10-10.
Table filtering
In most pages, in the upper right corner of the window, there is a search field to filter the elements, which is useful if the list of entries is too large to work with. Perform these steps to use search filtering: 1. Enter a value in the search box in the upper right corner of the window, as shown in Figure 10-11 on page 636.
2. Click the
3. This function enables you to filter your table based on the column names. In this example, a volume list is displayed containing names that include ESX somewhere in the name, highlighted in amber, as shown in Figure 10-12. Note that the search option is not case sensitive.
4. You can remove this filtered view by clicking Reset, as shown in Figure 10-13 on page 637.
Table information
With SVC 6.3, you are able to add or remove additional information in the tables available on most pages. As an example, in the Volumes page we will add a column to our table:
1. Right-click the header row of the table; see Figure 10-14. A menu with all available columns appears.
2. Select the column that you want to add (or remove) from this table. In our example, we added the volume ID column as shown in Figure 10-15 on page 638.
3. You can repeat this process several times to create custom tables that meet your requirements.
Sorting
Regardless of whether you use filtering, you can sort the displayed data by clicking a column header, as shown in Figure 10-17. In this example, we sort the table by volume ID.
After we click the volume ID column, the table is sorted by volume ID as shown in Figure 10-18 on page 640.
Note: By repeatedly clicking a column, you can sort this table based on that column in ascending or descending order.
10.1.3 Help
To access online help, click the Help link in the upper right corner of any panel, as shown in Figure 10-19.
This action opens a new window where you can find help on different topics (see Figure 10-20).
2. Type the new name that you want to assign to the controller, and press Enter as shown in Figure 10-23.
Controller name: The name can consist of the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the underscore (_) character. The name can be between one and 63 characters in length. However, the name cannot start with a number, the dash, or the underscore.
3. A task is launched to change the name of this Storage System. When it is completed, you can close this window.
4. The new name of your controller is displayed on the Disk Controller Systems panel.
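Tip: The controller can also be renamed from the CLI. A minimal sketch (controller0 and ITSO-DS3500 are placeholders for your controller ID and chosen name):
svctask chcontroller -name ITSO-DS3500 controller0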
3. The Discover devices task runs. 4. When the task is completed, click Close and see the new MDisks available.
You can add information (new columns) to the table, as explained in Table information on page 637. To retrieve more detailed information about a specific Storage Pool, select any Storage Pool in the left column. The top right corner of the panel, shown in Figure 10-26, contains the following information about this pool:
- Status
- Number of MDisks
- Number of volume copies
- Whether Easy Tier is active on this pool
- Volume Allocation
- Used Capacity
- Capacity
Change the view to MDisks by Pools. Select the pool that you want to work with and click the + (expand) button. This panel displays the MDisks that are present in this Storage Pool, as shown in Figure 10-27.
3. The Discover Device window is displayed. 4. Click Close to see the newly discovered MDisks.
2. The Create Storage Pools wizard opens. 3. On this first page, complete the following elements, as shown in Figure 10-30 on page 646: a. You can specify a name for the Storage Pool, as we have in Figure 10-30 on page 646. If you do not provide a name, the SVC automatically generates the name mdiskgrpx, where x is the ID sequence number that is assigned by the SVC internally.
Storage Pool name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The name can be between one and 63 characters in length and is case sensitive, but it cannot start with a number or the word MDiskgrp because this prefix is reserved for SVC assignment only.
b. You can also change the icon associated with this Storage Pool, as shown in Figure 10-30 on page 646.
c. If you expand the Advanced Settings box, you can specify:
- The extent size (256 MB by default)
- The warning threshold, which sends a warning to the event log when the capacity is first exceeded (80% by default)
d. Click Next.
4. On this page (Figure 10-31), you are able to detect new MDisks by using Detect MDisks. For more information about this topic, see 10.4.3, Discovering MDisks on page 653. a. Select the MDisks that you want to add to this Storage Pool. Tip: To add multiple MDisks, hold down Ctrl and use your mouse to select the entries you want to add. b. Click Finish to complete the creation.
5. In the Storage Pools panel (Figure 10-32 on page 647), the new Storage Pool is displayed.
At this point, you have completed the tasks that are required to create a Storage Pool.
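Tip: The same Storage Pool can be created from the CLI. A minimal sketch (the pool and MDisk names are placeholders; -ext and -warning correspond to the Extent Size and Warning threshold settings described above):
svctask mkmdiskgrp -name STGPool_DS3500 -ext 256 -warning 80% -mdisk mdisk0:mdisk1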
2. Type the new name that you want to assign to the Storage Pool and press Enter (Figure 10-34).
Storage Pools name: The name can consist of the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the underscore (_) character. The name can be between one and 63 characters in length. However, the name cannot start with a number, the dash or the underscore.
3. A task is launched to change the name of this pool. When it is completed, you can close this window. 4. From the Storage Pools panel, the new Storage Pool name is displayed.
2. In the Delete Pool window, click Delete to confirm that you want to delete the Storage Pool (Figure 10-36 on page 649). If there are MDisks and volumes within the Storage Pool that you are deleting, you must select the Delete all volumes, host mappings, and MDisks that are associated with this pool. option.
Attention: If you delete a Storage Pool by using the Delete all volumes, host mappings, and MDisks that are associated with this pool option, and volumes were associated with that Storage Pool, you will lose the data on your volumes because they are deleted before the Storage Pool. If you want to save your data, then migrate or mirror the volumes to another Storage Pool before you delete the Storage Pool previously assigned to the volumes.
10.3.7 Showing the volumes that are associated with a Storage Pool
To show the volumes that are associated with a Storage Pool, click Volumes and then click Volumes by Pool. For more information about this feature, see 10.7, Working with volumes on page 679.
To retrieve more detailed information about a specific MDisk, perform the following steps: 1. In the MDisks panel, from the expanded view of a pool (Figure 10-37), right-click an MDisk. 2. As shown in Figure 10-38, click Properties. 3. Alternatively, you can select Actions from the menu at the top of the MDisks by Pools view and select Properties for the selected MDisk.
4. For the selected MDisk, an overview is displayed showing its various parameters and dependent volumes; see Figure 10-39 on page 651.
Note: To obtain all information about the MDisk, select Show Details as shown in Figure 10-39.
Figure 10-39 MDisk Details page
5. Clicking Dependent Volumes displays information about volumes that reside on this MDisk, as shown in Figure 10-40. The volume panel is discussed in more detail in 10.7, Working with volumes on page 679.
Note: You can also right-click this MDisk as shown in Figure 10-38 on page 651 and select Rename from the list. 4. In the Rename MDisk window (Figure 10-42), type the new name that you want to assign to the MDisk and click Rename.
MDisk name: The name can consist of the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the underscore (_) character. The name can be between one and 63 characters in length.
The Discover Device window is displayed. 3. When the task is completed, click Close. 4. Newly assigned MDisks are displayed under Not in a Pool as Unmanaged. See Figure 10-44.
Troubleshooting: If your MDisks are still not visible, check that the logical unit numbers (LUNs) from your subsystem are properly assigned to the SVC (for example, using storage partitioning with a DS5000) and that appropriate zoning is in place (for example, the SVC can see the disk subsystem).
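Tip: From the CLI, the equivalent of Detect MDisks is:
svctask detectmdisk
svcinfo lsmdisk -filtervalue mode=unmanaged
The first command rescans the Fibre Channel network for new MDisks; the second lists the MDisks that have not yet been added to a pool.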
Note: You can also access the Add to Pool action by right-clicking an unmanaged MDisk. 3. From the Add MDisk to Pool window, select in which pool you want to integrate this MDisk and then click Add to Pool, as shown in Figure 10-46.
Note: You can also access the Remove from Pool action by right-clicking a managed MDisk.
3. From the Remove from Pool window (Figure 10-48), you need to validate the number of MDisks that you want to remove from this pool. This verification has been added to protect against accidentally deleting data. If volumes are using the MDisks that you are removing from the Storage Pool, you must select the option Remove the MDisk from the storage pool even if it has data on it. The system migrates the data to other MDisks in the pool. to confirm the removal of the MDisk. 4. Click Delete, as shown in Figure 10-48.
An error message is displayed, as shown in Figure 10-49 on page 656, if there is insufficient space to migrate the volume data to other extents on other MDisks in that Storage Pool.
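Tip: MDisks can be added to or removed from a pool from the CLI as well. A minimal sketch (the MDisk and pool names are placeholders):
svctask addmdisk -mdisk mdisk5 STGPool_DS3500
svctask rmmdisk -mdisk mdisk5 -force STGPool_DS3500
The -force flag corresponds to the GUI option described above: it migrates any extents that are in use to the remaining MDisks in the pool before the MDisk is removed.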
Check for Simple Network Management Protocol (SNMP) alerts in regard to the state of the hardware (before the disk was excluded) and any preventive maintenance that has been undertaken. If not, the hosts that were using volumes on the excluded MDisk now have I/O errors. After you take the necessary corrective action to repair the MDisk (for example, replace the failed disk and repair the SAN zones), you can tell the SVC to include the MDisk again. Perform the following steps to include an excluded MDisk: 1. From the SVC Welcome panel, click Physical Storage in the left menu, and then click the MDisks panel. 2. Select the MDisk that you want to include again. 3. Click Include Excluded MDisk in the Actions menu. Note: You can also include an excluded MDisk by right-clicking an MDisk and selecting Include Excluded MDisk from the list.
Note: For more detailed information about Easy Tier, see Chapter 7, Easy Tier on page 349.
Easy Tier is also still inactive (Figure 10-50) for the storage pool because we do not yet have a true multitier pool. To activate the pool, we have to set the SSD MDisks to their correct generic_ssd tier. To set an MDisk as an SSD in a Storage Pool, perform the following steps (repeat this action for each of your SSD MDisks):
1. Select the MDisk.
2. Click Select Tier in the Actions menu, as shown in Figure 10-51.
Note: You can also access the Select Tier action by right-clicking an MDisk.
3. In the Select MDisk Tier window, shown in Figure 10-52 on page 658, select Solid-State Drive using the drop-down list and then click OK.
4. Easy Tier is now activated for this multitier pool (Hard Disk Drive and Solid-State Drive), as shown in Figure 10-53.
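Tip: The tier can also be set from the CLI. A minimal sketch (mdisk7 is a placeholder for one of your SSD MDisks):
svctask chmdisk -tier generic_ssd mdisk7
Repeat the command for each SSD MDisk; the pool then reports two tiers and Easy Tier becomes active.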
10.5 Migration
See Chapter 6, Data migration on page 227 for a comprehensive description of data migration.
By using the Host Mapping panel, as shown in Figure 10-56 on page 660
Important: Several actions on the hosts are specific to the Ports by Host or the Host Mapping panels, but all these actions and others are accessible from the All Hosts panel. For this reason, all actions on hosts will be executed from the All Hosts panel.
Note: You can also access the Properties action by right-clicking a host.
3. For a given host, the Overview window presents information as shown in Figure 10-59.
Note: To obtain more information about the host, select Show Details (Figure 10-59). 4. On the Mapped Volumes tab (Figure 10-60), you see the volumes that are mapped to this host.
5. The Port Definitions tab (Figure 10-61) displays attachment information, such as the worldwide port names (WWPNs) or the iSCSI qualified name (IQN) that are defined for this host.
When you are finished viewing the details, click Close to return to the previous window.
3. Select Fibre-Channel Host from the two types of connection available (Figure 10-63).
4. In the Creating Hosts window (Figure 10-64 on page 665), type a name for your host (Host Name).
Host name: If you do not provide a name, the SVC automatically generates the name hostx (where x is the ID sequence number that is assigned by the SVC internally). If you want to provide a name, you can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The host name can be between one and 63 characters in length.
5. Fibre-Channel Ports section: Use the drop-down list to select the WWPNs that correspond to your HBA or HBAs and click Add Port to List in the Fibre-Channel Ports window. To add additional ports, repeat this action.
Note: If you added a wrong Fibre-Channel port, you can delete it from the list by clicking the red cross. If your WWPNs are not being displayed, click Rescan to rediscover new WWPNs available since the last scan.
Note: In certain cases, your WWPNs still might not be displayed, even though you are sure that your adapter is functioning (for example, you see the WWPN in the switch name server) and your zones are correctly set up. To rectify this, type the WWPN of your HBA or HBAs into the drop-down list and click Add Port to List. It will be displayed as unverified.
6. Advanced Settings section: If you need to modify the I/O Group, the Port Mask, or the Host Type, you must select Advanced to access these Advanced Settings, as shown in Figure 10-64 on page 665.
Select one or more I/O Groups from which the host can access volumes. By default, all I/O Groups are selected.
You can use a port mask to control the node target ports that a host can access. The port mask applies to the logins from the host initiator port that is associated with the host object.
Note: For each login between a host bus adapter (HBA) port and a node port, the node examines the port mask that is associated with the host object for which the HBA is a member and determines if access is allowed or denied. If access is denied, the node responds to SCSI commands as though the HBA port is unknown. Select the Host Type. The default type is Generic. Use generic for all hosts, unless you use Hewlett-Packard UNIX (HP-UX) or Sun. For these, select HP_UX (to have more than eight LUNs supported for HP_UX machines) or TPGS for Sun hosts using MPxIO.
7. Click the Create Host button as shown in Figure 10-64. This action brings you back to the All Hosts panel (Figure 10-65 on page 665) where you can see the newly added FC host.
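Tip: A Fibre Channel host can also be defined from the CLI. A minimal sketch (the host name and WWPNs are placeholders; the optional advanced settings described above map to parameters such as -iogrp and -type):
svctask mkhost -name ESX_FC -hbawwpn 210000E08B054CAA:210000E08B054CAB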
iSCSI-attached hosts
To create a new host that uses the iSCSI connection type, perform the following steps: 1. Go to the All Hosts panel from the SVC Welcome panel shown in Figure 10-1 on page 632, and click Hosts → All Hosts (Figure 10-54 on page 659). 2. Click New Host, as shown in Figure 10-66.
3. Select iSCSI Host from the two types of connection (Figure 10-67).
4. In the Creating Hosts window (Figure 10-68 on page 668), type a name for your host (Host Name).
Host name: If you do not provide a name, the SVC automatically generates the name hostx (where x is the ID sequence number that is assigned by the SVC internally). If you want to provide a name, you can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore. The host name can be between one and 63 characters in length.
5. iSCSI Ports section: Enter the iSCSI initiator or IQN as an iSCSI port, and then click Add Port to List. This IQN is obtained from the server and generally has the same purpose as the WWPN. To add additional ports, repeat this action.
Note: If you add the wrong iSCSI port, you can delete it from the list by clicking the red cross.
If needed, select Use CHAP authentication (all ports) and enter the CHAP secret, as shown in Figure 10-68 on page 668. The CHAP secret is the authentication method that is used to restrict access for other iSCSI hosts that use the same connection. You can set the CHAP for the whole cluster under the cluster properties or for each host definition. The CHAP must be identical on the server and the cluster/host definition. You can create an iSCSI host definition without using a CHAP.
6. Advanced Settings section: If you need to modify the I/O Group, the Port Mask, or the Host Type, you have to select the Advanced button to access these settings, as shown in Figure 10-64 on page 665.
Select one or more I/O Groups from which the host can access volumes. By default, all I/O Groups are selected.
You can use a port mask to control the node target ports that a host can access. The port mask applies to the logins from the host initiator port that is associated with the host object.
Note: For each login between a host bus adapter (HBA) port and a node port, the node examines the port mask that is associated with the host object for which the HBA is a member and determines if access is allowed or denied. If access is denied, the node responds to SCSI commands as though the HBA port is unknown.
Select the Host Type. The default type is Generic. Use Generic for all hosts, unless you use Hewlett-Packard UNIX (HP-UX) or Sun. For these, select HP_UX (to have more than eight LUNs supported for HP_UX machines) or TPGS for Sun hosts using MPxIO.
7. Click Create Host as shown in Figure 10-68. This action brings you back to the All Hosts panel (Figure 10-69) where you can see the newly added iSCSI host.
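Tip: An iSCSI host can also be defined from the CLI, including its CHAP secret. A minimal sketch (the host name, IQN, and secret are placeholders):
svctask mkhost -name RHEL_iSCSI -iscsiname iqn.1994-05.com.redhat:rhel1
svctask chhost -chapsecret mysecret RHEL_iSCSI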
Note: There are two other ways to rename a host. You can right-click a host and select Rename from the list, or use the method described in 10.6.4, Modifying a host on page 669. 3. In the Rename Host window, type the new name that you want to assign and click Rename (Figure 10-71).
Host name: If you do not provide a name, the SVC automatically generates the name hostx (where x is the ID sequence number that is assigned by the SVC internally). If you want to provide a name, you can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The host name can be between one and 63 characters in length.
Figure 10-72 Host Properties
Note: You can also right-click a host and select Properties from the list.
3. In the Overview tab, click Edit to modify parameters for this host. You can modify the following parameters:
- The Host Name. If you do not provide a name, the SVC automatically generates the name hostx (where x is the ID sequence number that is assigned by the SVC internally). If you want to provide a name, you can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The host name can be between one and 63 characters in length.
- The Host Type: The default type is Generic. Use Generic for all hosts, unless you use Hewlett-Packard UNIX (HP-UX) or Sun. For these, select HP_UX (to have more than eight LUNs supported for HP_UX machines) or TPGS for Sun hosts using MPxIO.
- Advanced Settings: If you need to modify the I/O Group, the Port Mask, or the iSCSI CHAP Secret (in case you want to convert it to an iSCSI host), you must select Advanced to access these settings, as shown in Figure 10-73 on page 670.
4. Save the changes by clicking Save. 5. You can close the Host Details window by clicking Close.
Note: You can also right-click a host and select Delete from the list.
3. The Delete Host window opens, as shown in Figure 10-75 on page 671. In the field Verify the number of hosts that you are deleting, enter a value matching the number of hosts that you want to remove. This verification has been added to guard against inadvertently deleting the wrong hosts. If you still have volumes associated with the host, and you are sure that you want to delete it even though these volumes will no longer be accessible, select the Delete the host even if volumes are mapped to them. These volumes will no longer be accessible to the hosts. option. 4. Click Delete to complete the operation (Figure 10-75).
To add a port to a host, perform the following steps: 1. Select the host in the table. 2. Click Properties in the Actions menu (Figure 10-72 on page 669).
Note: You can also right-click a host and select Properties from the list. 3. On the Properties window, click Port Definitions (Figure 10-77).
4. Click Add and select the type of port that you want to add to your host (Fibre Channel Port or iSCSI Port) as shown in Figure 10-78. In this example, we selected a Fibre-Channel Port.
5. In the Add Fibre-Channel Ports window (Figure 10-79 on page 673), use the drop-down list to select the WWPNs that correspond to your HBA or HBAs and click Add Port to List in the Fibre-Channel Ports window. To add additional ports, repeat this action. Note: If you added the wrong Fibre-Channel port, you can delete it from the list by clicking the red cross. If your WWPNs are not displayed, click Rescan to rediscover any new WWPNs available since the last scan. Note: In certain cases your WWPNs might still not be displayed, even though you are sure your adapter is functioning (for example, you see the WWPN in the switch name server) and your zones are correctly set up. To rectify this, type the WWPN of your HBA or HBAs into the drop-down list and click Add Port to List. It will be displayed as unverified. 6. To finish, click Add Ports to Host.
7. This action takes you back to the Port Definitions window (Figure 10-80), where you can see the newly added ports.
Note: This action is exactly the same for iSCSI Ports, except that you have to add iSCSI ports.
Tip: You can also right-click a host and select Properties from the list.
4. Select the port or ports that you want to remove. 5. Click Delete Port (Figure 10-83).
6. In the Delete Port window (Figure 10-84), in the field Verify the number of ports to delete, enter a value matching the number of ports that you want to remove. This verification has been added to guard against inadvertently deleting the wrong ports.
7. Click Delete to remove the port or ports. 8. This action brings you back to the Port Definitions window.
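Tip: Host ports can be added and removed from the CLI as well. A minimal sketch (the WWPN and host name are placeholders; use -iscsiname instead of -hbawwpn for iSCSI ports):
svctask addhostport -hbawwpn 210000E08B89C1CD ESX_FC
svctask rmhostport -hbawwpn 210000E08B89C1CD ESX_FC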
3. On the Modify Mappings window select the volume or volumes that you want to map to this host and move each of them to the right table using the right arrow, as shown in Figure 10-86. If you need to remove them, use the left arrow.
In the right table, you can edit the SCSI ID by selecting a mapping that is highlighted in yellow, indicating that the mapping is new. Click Edit SCSI ID (Figure 10-86). Note: Only new mappings can have their SCSI ID changed. To edit the SCSI ID of an existing mapping, you must unmap the volume and re-create the mapping to the volume. In the Edit SCSI ID window, change the SCSI ID, then click OK (Figure 10-87 on page 677).
4. After all the volumes you wanted to map to this host have been added, click OK to create the Host mapping relationships.
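Tip: The same mapping can be created from the CLI, including an explicit SCSI ID. A minimal sketch (the host and volume names are placeholders):
svctask mkvdiskhostmap -host ESX_FC -scsi 0 ESX_Volume1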
Tip: You can also right-click a host and select Modify Mappings from the list. 3. Select the host mapping or mappings that you want to remove. 4. When you have selected the volumes that you want to remove, click the arrow in the middle, and then click the Apply or Map Volumes button to complete the Modify Mappings actions (Figure 10-89).
Tip: You can also right-click a host and select Unmap All volumes from the list.
From the Unmap from Host window (Figure 10-91 on page 679), in the Verify the number of mappings that this operation affects: field, enter a value matching the number of mappings that you want to remove. This verification has been added to guard against removing the wrong mappings.
3. Click Unmap to remove the host mapping or mappings. This action brings you back to the All Hosts window.
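Tip: A single mapping can also be removed from the CLI. A minimal sketch (names are placeholders):
svctask rmvdiskhostmap -host ESX_FC ESX_Volume1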
Or you can use the Volumes by Pool panel, as shown in Figure 10-93 on page 680.
Or you can use the Volumes by Host panel, as shown in Figure 10-94.
Important: Several actions on the volumes are specific to the Volumes by Pool or to the Volumes by Host panels. However, all these actions and others are accessible from the All Volumes panel. All actions in the following sections are executed from the All Volumes panel.
Tip: You can also access the Properties action by right-clicking a volume. 3. The Overview tab shows information about a given volume (Figure 10-96).
Note: To obtain more information about the volume, select Show Details
4. The Host Maps tab (Figure 10-97) displays the hosts that are mapped with this volume.
5. The Member MDisks tab (Figure 10-98 on page 684) displays the used MDisks for this volume. You can perform actions on the MDisks such as removing them from a pool, adding them to a tier, renaming them, showing their dependent volumes, or seeing their properties.
6. When you have finished viewing the details, click Close to return to the All Volumes panel.
3. Select one of the following presets, as shown in the figure on page 685:
- Generic: Create volumes that use a set amount of capacity from the selected storage pool.
- Thin Provision: Create volumes whose capacity is large, but which use only the capacity that is written by the host application from the pool.
- Mirror: Create volumes with two physical copies that provide data protection. Each copy can belong to a different storage pool to protect data from storage failures.
- Thin Mirror: Create volumes with two physical copies to protect data from failures while using only the capacity that is written by the host application.
Note: For our example, we chose the Generic preset. However, whatever the selected preset is, you have the opportunity afterward to reconsider your decision by customizing the volume using the Advanced... button.
4. After selecting a preset, in our example Generic, you must select the Storage Pool on which the data will be striped (Figure 10-100).
5. After the Storage Pool has been selected, the window is updated automatically and you have to select a volume name and size, as shown in Figure 10-101 on page 686. Enter a name if you want to create a single volume, or a naming prefix if you want to create multiple volumes.
Volume name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The volume name can be between one and 63 characters in length.
Enter the size of the volume that you want to create and select the capacity measurement (bytes, KB, MB, GB, or TB) from the list.
Note: An entry of 1 GB uses 1024 MB.
An updated summary automatically appears at the bottom of the window to give you an idea of the space that will be used and the space that remains in the pool.
Various optional actions are available from this window:
- You can modify the Storage Pool by clicking Edit. In this case, you can select another storage pool.
- You can create additional volumes by clicking the add button. This action can be repeated as many times as necessary. You can remove them by clicking the remove button.
Note: When you create more than one volume, the wizard does not ask you for a name for each volume to be created. Instead, the name that you use here becomes the prefix and has a number, starting at zero, appended to it as each volume is created.
6. You can activate and customize advanced features, such as thin provisioning or mirroring, depending on the preset you selected. To access these settings, click Advanced... On the Characteristics tab (Figure 10-102 on page 687), you can set the following options:
- General: Format the new volume by selecting the Format Before Use check box (formatting writes zeros to the volume before it can be used; that is, it writes zeros to its MDisk extents).
- Locality: Choose an I/O Group and then select a preferred node.
- OpenVMS only: Enter the UDID (OpenVMS). This field needs to be completed only for an OpenVMS system.
Note: Each OpenVMS fibre-attached volume requires a user-defined identifier or unit device identifier (UDID). A UDID is a nonnegative integer that is used when an OpenVMS device name is created. To recognize volumes, OpenVMS issues a UDID value, which is a unique numerical number.
On the Thin Provisioning tab (Figure 10-103 on page 688), after you activate thin provisioning by selecting the Thin provisioning check box, you can set the following options:
- Real: Type the real size that you want to allocate. This size is the amount of disk space that will actually be allocated. It can be either a percentage of the virtual size or a specific number in GB.
- Automatically Expand: Select auto expand, which allows the real disk size to grow as required.
- Warning Threshold: Type a percentage or select a specific size for the usage threshold warning. It generates a warning when the used disk capacity on the space-efficient copy first exceeds the specified threshold.
- Thin-Provisioned Grain Size: Select the grain size (32 KB, 64 KB, 128 KB, or 256 KB). Smaller grain sizes save space and larger grain sizes produce better performance. Try to match the FlashCopy grain size if the volume will be used for FlashCopy.
Important: If the Thin Provision or Thin Mirror preset is selected on the first page (see the figure on page 685), the Thin provisioning check box is already selected and the parameter presets are the following values:
- Real: 2% of Virtual Capacity
- Automatically Expand: Selected
- Warning Threshold: Selected, with a value of 80% of Virtual Capacity
- Thin-Provisioned Grain Size: 32 KB
On the Mirroring tab (Figure 10-104 on page 689), after you activate mirroring by selecting the Create Mirrored Copy check box, you can set the following option:
- Mirror Sync Rate: Enter the mirror synchronization rate. It is the I/O governing rate, as a percentage, that determines how quickly copies are synchronized. A zero value disables synchronization.
Important: If you activate this feature from the Advanced menu, you will have to select a secondary pool on the main window (Figure 10-101 on page 686). The Primary Pool is used as the primary and preferred copy for read operations. The secondary pool is used as the secondary copy.
Important: If the Mirror or Thin Mirror preset is selected on the first page (see the figure on page 685), the Mirroring check box is already selected and the parameter preset is the following value: Mirror Sync Rate: 80% of maximum.
7. After all the advanced settings have been set, click OK to return to the main menu (Figure 10-101 on page 686).
8. Then, you have the choice to only create the volume by using the Create button, or to create and map it by using the Create and Map to Host button. If you choose to only create the volume, you return to the main All Volumes panel, where you see your volume created but not mapped (Figure 10-105). You can map it later.
If you want to create and map it from the volume creation window, click the Continue button and another window opens. In the Modify Mappings window, select the host to which you want to map this volume by using the drop-down button, and then click Next (Figure 10-106 on page 690).
In the Modify Mappings window, verify the mapping. If you want to modify it, select the volume or volumes that you want to map to a host and move each of them to the right table using the right arrow, as shown in Figure 10-107. If you need to remove them, use the left arrow.
In the right table, you can edit the SCSI ID by selecting a mapping that is highlighted in yellow, indicating that the mapping is new. Next, click Edit SCSI ID (shown in Figure 10-86 on page 677). Note: Only new mappings can have their SCSI ID changed. To edit the SCSI ID of an existing mapping, you must unmap the volume and re-create the mapping to the volume. In the Edit SCSI ID window, change the SCSI ID, then click OK (Figure 10-108 on page 691).
After all volumes that you wanted to map to this host have been added, click OK to create the Host mapping relationships and finalize the volume creation. You will return to the main All Volume window and see your volume created and mapped as shown in Figure 10-109.
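Tip: The presets described above map to options of the mkvdisk CLI command. A minimal sketch (the pool, I/O Group, and volume names are placeholders):
svctask mkvdisk -mdiskgrp STGPool_DS3500 -iogrp 0 -size 10 -unit gb -name vol_generic
svctask mkvdisk -mdiskgrp STGPool_DS3500 -iogrp 0 -size 10 -unit gb -rsize 2% -autoexpand -warning 80% -grainsize 32 -name vol_thin
svctask mkvdisk -mdiskgrp Pool1:Pool2 -iogrp 0 -size 10 -unit gb -copies 2 -syncrate 80 -name vol_mirror
The first command corresponds to the Generic preset, the second to Thin Provision (2% real size, auto expand, 80% warning threshold, 32 KB grain), and the third to Mirror, with one copy in each pool.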
Tip: There are two other ways to rename a volume. You can right-click a volume and select Rename from the list, or you can use the method explained in 10.7.4 on page 692.
3. In the Rename Volume window, type the new name that you want to assign to the volume, and click OK (Figure 10-111).
Volume name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The volume name can be between one and 63 characters in length.
Tip: You can also right-click a volume and select Properties from the list.
3. In the Overview tab, click Edit to modify parameters for this volume (Figure 10-113 on page 694). From this window, you can modify the following parameters:
- Volume Name: You can modify the volume name. You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore. The volume name can be between one and 63 characters in length.
- I/O Group: You can select an alternate I/O Group from the list to alter the I/O Group to which the volume is assigned. You can also select the Force check box. This option changes the I/O Group even when the cache state is Not Empty or Corrupt, and it stops synchronization for mirrored volumes.
- Preferred node: You can change the preferred node for this volume. Hosts try to access the volume through the preferred node. By default, the system automatically balances the load between nodes.
- Mirror Sync Rate: Change the mirror sync rate. It is the I/O governing rate, as a percentage, that determines how quickly copies are synchronized. A zero value disables synchronization.
- Cache Mode: By clearing the check box, the SVC cache is disabled (read/write cache is disabled).
- OpenVMS: Enter the UDID (OpenVMS). This field needs to be completed only for an OpenVMS system.
Note: Each OpenVMS fibre-attached volume requires a user-defined identifier or unit device identifier (UDID). A UDID is a nonnegative integer that is used when an OpenVMS device name is created. To recognize volumes, OpenVMS issues a UDID value, which is a unique numerical number.
4. Save the changes by clicking Save. 5. You can close the Host Details window by clicking Close.
Tip: You can also right-click the volume and select Volume Copy Actions → Thin Provisioned → Edit Properties from the list. For a mirrored volume: Select the thin-provisioned copy of the mirrored volume that you want to modify. In the Actions menu, click Thin Provisioned → Edit Properties, as shown in Figure 10-116.
Tip: You can also right-click the thin-provisioned copy and select Thin Provisioned → Edit Properties from the list.
2. The Edit Properties: volumename (where volumename is the volume that you selected in the previous step) window opens (Figure 10-117). From this window, you are able to modify:
- Warning Threshold: Type a percentage. It generates a warning when the used disk capacity on the thin-provisioned copy first exceeds the specified threshold.
- Automatically Expand: Auto expand allows the real disk size to grow automatically as required.
Note: You can modify the real size of your thin-provisioned volume by using the GUI. Refer to 10.7.12, Shrinking the real capacity of a thin-provisioned volume on page 709 or 10.7.13, Expanding the real capacity of a thin provisioned volume on page 712, depending on your needs.
Tip: You can also right-click a volume and select Delete from the list.
3. The Delete Volume window opens, as shown in Figure 10-119 on page 698. In the field Verify the number of volumes that you are deleting, enter a value matching the number of volumes that you want to remove. This verification has been added to guard against deleting the wrong volumes. Important: Deleting a volume is a destructive action for the user data residing on that volume. If you still have a volume (or volumes) associated with a host (or hosts) or used with FlashCopy or remote copy, and you definitely want to delete the volume (or volumes), select the Delete the volume even if it has host mappings or is used in FlashCopy mappings or remote-copy relationships. option. Click Delete to complete the operation (Figure 10-119 on page 698).
3. On the Modify Mappings window, select the host to which you want to map this volume by using the drop-down button, and then click Next (Figure 10-106 on page 690).
Figure 10-121 Select the host to which you want to map your volume
4. On the Modify Mappings window, verify the mapping. If you want to modify it, select the volume or volumes that you want to map to a host and move each of them to the right table using the right arrow as shown in Figure 10-122. If you need to remove them, use the left arrow.
In the right table, you can edit the SCSI ID. Select a mapping that is highlighted in yellow, which indicates that the mapping is new, and click Edit SCSI ID (shown in Figure 10-86 on page 677). Note: Only new mappings can have their SCSI ID changed. To edit the SCSI ID of an existing mapping, you must unmap the volume and re-create the mapping to the volume. In the Edit SCSI ID window, change the SCSI ID, then click OK (Figure 10-123 on page 700).
5. After all the volumes you want to map to this host have been added, click OK. You will return to the main All Volumes panel.
Tip: You can also right-click a volume and select Properties from the list. 3. On the Properties window, click the Host Maps tab (Figure 10-125).
Note: You can also access this window by selecting the volume in the table and clicking View Mapped Hosts in the Actions menu (Figure 10-126).
4. Select the host mapping or mappings that you want to remove. 5. Click Unmap from Host (Figure 10-127).
In the Unmap Host window (Figure 10-128 on page 703), in the field Verify the number of hosts that this operation affects:, enter a value matching the number of host mappings that you want to remove. This verification has been added to guard against removing the wrong mappings.
6. Click Unmap to remove the host mapping or mappings. This action returns you to the Host Maps window. 7. Click Close to return to the main All Volumes panel.
Tip: You can also right-click a volume and select Unmap All Hosts from the list.
3. In the Unmap from Hosts window (Figure 10-130), in the field Verify the number of mappings that this operation affects:, enter a value matching the number of mappings that you want to remove. This verification has been added to guard against removing the wrong mappings.
4. Click Unmap to remove the host mapping or mappings. This action returns you to the All Volumes panel.
1. Perform any necessary steps on your host to ensure that you are not using the space that you are about to remove. 2. Select the volume that you want to shrink in the table. 3. Click Shrink in the Actions menu (Figure 10-131).
Tip: You can also right-click a volume and select Shrink from the list.
4. The Shrink Volume: volumename window (where volumename is the volume that you selected in the previous step) opens. See Figure 10-132 on page 707. You can either enter how much you want to shrink the volume using the field Shrink By or you can directly enter the final size that you want to use for the volume using the field Final Size. The other field will be computed automatically. For example, if you have a 20 GB disk and you want it to become 15 GB, you can specify 5 GB in Shrink By field or you can directly specify 15 GB in Final Size field as shown in Figure 10-132 on page 707. 5. When you are finished, click Shrink as shown in Figure 10-132 on page 707, and the changes become visible on your host.
Tip: You can also right-click a volume and select Expand from the list.
3. The Expand Volume: volumename window (where volumename is the volume that you selected in the previous step) opens; see Figure 10-134 on page 709. You can either enter how much you want to enlarge the volume by using the field Expand By, or you can directly enter the final size that you want to use for the volume by using the field Final Size. The other field will be computed automatically. For example, if you have a 10 GB disk and you want it to become 20 GB, you can specify 10 GB in the Expand By field or you can directly specify 20 GB in the Final Size field as shown in Figure 10-134 on page 709. Volume expansion notes: No support exists for the expansion of image mode volumes. If there are insufficient extents to expand your volume to the specified size, you receive an error message. If you use volume mirroring, all copies must be synchronized before expanding. 4. When you are finished, click Expand (see Figure 10-134 on page 709).
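Tip: The same resize operations are available from the CLI. A minimal sketch (volume_A is a placeholder):
svctask shrinkvdisksize -size 5 -unit gb volume_A
svctask expandvdisksize -size 10 -unit gb volume_A
The first command shrinks the volume by 5 GB; the second expands it by 10 GB. Both values are deltas, not final sizes.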
Figure 10-135 Non-mirrored volume: Thin Provisioned Shrink action menu (see also Figure 10-136 on page 710)
Tip: You can also right-click the volume and select Volume Copy Actions → Thin Provisioned → Shrink from the list. For a mirrored volume: Select the thin-provisioned copy of the mirrored volume that you want to modify and, in the Actions menu, click Thin Provisioned → Shrink, as shown in Figure 10-137.
Tip: You can also right-click the thin-provisioned copy and select Thin Provisioned → Shrink from the list.
2. The Shrink Volume: volumename window (where volumename is the volume that you selected in the previous step) opens; see Figure 10-138. You can either enter how much you want to shrink the volume by using the field Shrink By, or you can directly enter the final real capacity that you want to use for the volume by using the field Final Real Capacity. The other field will be computed automatically. For example, if you have a current real capacity equal to 118.8 MB and you want a final real size equal to 10 MB, you can specify 108.8 MB in the Shrink By field, or you can directly specify 10 MB in the Final Real Capacity field as shown in Figure 10-138. 3. When you are finished, click Shrink (Figure 10-138) and the changes will become visible on your host.
Tip: You can also right-click the volume and select Volume Copy Actions → Thin Provisioned → Expand from the list.
For a mirrored volume: Select the thin-provisioned copy of the mirrored volume that you want to modify and, in the Actions menu, click Thin Provisioned → Expand (Figure 10-140).
Tip: You can also right-click the thin provisioned copy and select Thin Provisioned Expand from the list.
2. The Expand Volume: volumename window (where volumename is the volume that you selected in the previous step) opens (Figure 10-141). You can either enter how much you want to expand the volume using the field Expand By, or you can directly enter the final real capacity that you want to use for the volume using the field Final Real Capacity. The other field will be computed automatically. For example, if you have a current real capacity equal to 10 MB and you want a final real size equal to 100 MB, you can specify 90 MB in the Expand By field or you can directly specify 100 MB in the Final Real Capacity field, as shown in Figure 10-141. 3. When you are finished, click Expand (Figure 10-141) and the changes will become visible on your host.
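Tip: The real capacity of a thin-provisioned volume can also be changed from the CLI with the -rsize parameter, which again takes a delta rather than a final size. A minimal sketch (volume_thin is a placeholder):
svctask shrinkvdisksize -rsize 108 -unit mb volume_thin
svctask expandvdisksize -rsize 90 -unit mb volume_thin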
1. Select the volume that you want to migrate in the table. 2. Click Migrate to Another Pool in the Actions menu (Figure 10-142).
Tip: You can also right-click a volume and select Migrate to Another Pool from the list. 3. The Migrate Volume Copy window opens (Figure 10-143). Select the Storage Pool to which you want to reassign the volume. You will only be presented with a list of Storage Pools with the same extent size. 4. When you have finished making your selections, click Migrate to begin the migration process.
Important: After a migration starts, you cannot stop it. Migration continues until it is complete unless it is stopped or suspended by an error condition, or the volume that is being migrated is deleted.
5. You can check the migration using the Running Tasks menu (Figure 10-144 on page 715).
To expand this area, click the icon and then click Migration. Figure 10-145 shows a detailed view of the running tasks.
6. When the migration is finished, the volume will be part of the new pool.
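Tip: The same migration can be started and monitored from the CLI. A minimal sketch (the volume and pool names are placeholders):
svctask migratevdisk -vdisk volume_A -mdiskgrp STGPool_DS5000
svcinfo lsmigrate
The lsmigrate command reports the progress of every running migration.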
Tip: You can also right-click a volume and select Volume Copy Actions and then Add Mirrored Copy from the list.
3. The Add Volume Copy: volumename window (where volumename is the volume that you selected in the previous step) opens (Figure 10-147 on page 718). You can perform the following steps separately or in combination:
- Select the Storage Pool in which you want to put the copy. To maintain higher availability, choose a separate group.
- Select the Enable Thin Provisioning check box to make the copy space-efficient. The following parameters are used for this thin-provisioned copy:
  Real Size: 2% of Virtual Capacity
  Automatically Expand: Active
  Warning Threshold: 80% of Virtual Capacity
  Thin-Provisioned Grain Size: 32 KB
Note: Real Size, Auto expand, and Warning Threshold can be changed only after the thin-provisioned volume copy has been added. For information about modifying the real size of your thin-provisioned volume, see 10.7.12, Shrinking the real capacity of a thin-provisioned volume on page 709 and 10.7.13, Expanding the real capacity of a thin provisioned volume on page 712. For information about modifying the Auto expand and Warning Threshold of your thin provisioned volume, see 10.7.5, Modifying thin-provisioning volume properties on page 694. 4. Click Add Copy (Figure 10-147).
5. You can check the migration using the Running Tasks menu (see Figure 10-144 on page 715). To expand this Status Area, click the icon and click Volume Synchronization. Figure 10-148 on page 719 shows a detailed view of the running tasks.
Note: You can change the Mirror Sync Rate (the default is 50%) by modifying the volume properties. For more information, see 10.7.4 on page 692. 6. When synchronization is finished, the volume is part of the new pool (Figure 10-149).
Note: As shown in Figure 10-149, the primary copy is identified with an asterisk (*). In this example, Copy 0 is the primary copy.
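Tip: A volume copy can also be added from the CLI, with the same thin-provisioning parameters listed above. A minimal sketch (the pool and volume names are placeholders):
svctask addvdiskcopy -mdiskgrp Pool2 volume_A
svctask addvdiskcopy -mdiskgrp Pool2 -rsize 2% -autoexpand -warning 80% -grainsize 32 volume_A
The first form adds a fully allocated copy; the second, thin-provisioned form uses the same presets as the GUI. Use one form or the other, because a volume can have at most two copies.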
Tip: You can also right-click a volume and select Delete this Copy from the list.
2. The Warning window opens (Figure 10-151). Click OK to confirm your choice.
Note: If you try to remove the primary copy before it has been synchronized with the other one, you receive the message: The command failed because the copy specified is the only synchronized copy. You must wait until the end of the synchronization to be able to remove this copy. 3. The copy is now deleted.
Tip: You can also right-click a volume and select Split into New Volume from the list.
2. The Split Volume Copy window opens (Figure 10-153). In this window, type a name for the new volume. Volume name: If you do not provide a name, the SVC automatically generates the name vdiskx (where x is the ID sequence number that is assigned by the SVC internally). If you want to provide a name, you can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore. The volume name can be between one and 63 characters in length. 3. Click Split Volume Copy (Figure 10-153).
4. This new volume is now available to be mapped to a host. Important: After you split a volume mirror, you cannot resynchronize or recombine them. You must create a volume copy from scratch.
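Tip: The split can also be performed from the CLI. A minimal sketch (the copy ID and names are placeholders):
svctask splitvdiskcopy -copy 1 -name volume_A_split volume_A
This creates a new volume named volume_A_split from copy 1 of volume_A; as noted above, the copies cannot be recombined afterward.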
2. The Validate Volume Copies window opens (Figure 10-155). In this window, select one of the following options:
- Generate Event of differences: Use this option if you only want to verify that the mirrored volume copies are identical. If any difference is found, the command stops and logs an error that includes the logical block address (LBA) and the length of the first difference. You can use this option, starting at a different LBA each time, to count the number of differences on a volume.
- Overwrite differences: Use this option to overwrite contents from the primary volume copy to the other volume copy. The command corrects any differing sectors by copying the sectors from the primary copy to the copies being compared. Upon completion, the command process logs an event, which indicates the number of differences that were corrected. Use this option if you are sure that either the primary volume copy data is correct or that your host applications can handle incorrect data.
- Return Media Error to Host: Use this option to convert sectors on all volume copies that contain different contents into virtual medium errors. Upon completion, the command logs an event, which indicates the number of differences that were found, the number that were converted into medium errors, and the number that were not converted. Use this option if you are unsure what the correct data is, and you do not want an incorrect version of the data to be used.
3. Click Validate (Figure 10-155 on page 723). 4. The volume is now checked.
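Tip: The three options correspond to the parameters of the repairvdiskcopy CLI command. A minimal sketch (volume_A is a placeholder; use exactly one parameter per invocation):
svctask repairvdiskcopy -validate volume_A
svctask repairvdiskcopy -resync volume_A
svctask repairvdiskcopy -medium volume_A
-validate only reports differences, -resync overwrites the other copy from the primary, and -medium converts differing sectors to virtual medium errors.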
Tip: You can also right-click a volume and select Volume Copy Actions → Add Mirrored Copy from the list.
3. The Add Volume Copy: volumename window (where volumename is the volume that you selected in the previous step) opens (Figure 10-157 on page 724). You can perform the following steps separately or in combination:
- Select the Storage Pool in which you want to put the copy. To maintain higher availability, choose a separate group.
- Select the Enable Thin Provisioning check box to make the copy space-efficient. The following parameters are used for this thin-provisioned copy:
  Real Size: 2% of Virtual Capacity
  Automatically Expand: Active
  Warning Threshold: 80% of Virtual Capacity
  Thin-Provisioned Grain Size: 32 KB
Note: Real Size, Auto expand, and Warning Threshold can be changed after the volume copy has been added in the GUI. For the Thin-Provisioned Grain Size, you need to use the CLI.
4. Click Add Copy.
5. You can check the migration using the Running Tasks Status Area menu, as shown in Figure 10-144 on page 715. To expand this Status Area, click the icon and click Volume Synchronization. Figure 10-158 shows the detailed view of the running tasks.
Note: You can change the Mirror Sync Rate (50% by default) by modifying the volume properties. For more information, see 10.7.4 on page 692. 6. When the synchronization is finished, select the non-thin-provisioned copy that you want to remove in the table and, in the Actions menu, click Delete this Copy (Figure 10-159).
Tip: You can also right-click a volume and select Delete this Copy from the list. 7. The Warning window opens (Figure 10-160). Click OK to confirm your choice.
Note: If you try to remove the primary copy before it has been synchronized with the other one, you receive the following message: The command failed because the copy specified is the only synchronized copy. You must wait until the end of the synchronization to be able to remove this copy.
8. When the copy is deleted, your thin-provisioned volume is ready to be used. At this point, you have completed the required tasks to manage volumes within an SVC environment.
By using the Consistency Groups panel (Figure 10-162 on page 727). A Consistency Group is a container for mappings. You can add many mappings to a Consistency Group.
By using the FlashCopy Mappings panel (Figure 10-163 on page 728). A FlashCopy mapping defines the relationship between a source volume and a target volume.
2. Select the volume that you want to create the FlashCopy relationship for (Figure 10-165).
Note: To create many FlashCopy mappings at one time, select multiple volumes by holding down the Ctrl key and using the mouse to select the entries that you want.
Depending on whether or not you have already created the target volumes for your FlashCopy mappings, there are two options: If you have already created the target volumes, see Using existing target volumes on page 729. If you want the SVC to create the target volumes for you, see Creating new target volumes on page 734.
2. The New FlashCopy Mapping window opens (see Figure 10-167). In this window, you have to create the relationship between the source volume (the disk that is copied) and the target volume (the disk that receives the copy). A mapping can be created between any two volumes in a cluster. Select a volume in the Target Volumes column by using the drop-down list for your selected Source Volume, and then click the Add button (Figure 10-194 on page 748). If you need to create other relationships, repeat this action. Important: The source and target volumes must be of equal size. So, for a given source volume, only targets of the appropriate size are visible.
Note: The volumes do not have to be in the same I/O group or storage pool. 3. Click Next after all relationships that you wanted to create are registered (Figure 10-168).
4. On the next window, select one FlashCopy preset. The GUI provides three presets (Snapshot, Clone, Backup) to simplify the more common FlashCopy operations (Figure 10-169). The presets and their use cases are described here:
- Snapshot: Creates a copy-on-write point-in-time copy.
- Clone: Creates an exact replica of the source volume on a target volume. The copy can be changed without impacting the original volume.
- Backup: Creates a FlashCopy mapping that can be used to recover data or objects if the system experiences data loss. These backups can be copied multiple times from source and target volumes.
For whichever preset you select, you can customize various advanced options. You access these settings by clicking Advanced Settings (Figure 10-170 on page 733). If you prefer not to customize these settings, go directly to step 5 on page 733. You can customize the following options, as shown in Figure 10-170:
- Background Copy Rate: This option determines the priority that is given to the copy process. A faster rate increases the priority of the process, which can affect the performance of other operations.
- Incremental: This option copies only the parts of the source or target volumes that have changed since the last copy. Incremental copies reduce the completion time of the copy operation.
Note: Even if the type of the FlashCopy mapping is incremental, the first copy process copies all of the data from the source to the target volume.
- Delete after completion: This option automatically deletes a FlashCopy mapping after the background copy is completed. Do not use this option when the background copy rate is set to zero (0).
- Cleaning Rate: This option minimizes the amount of time that a mapping is in the stopping state. If the mapping has not completed, the target volume is offline while the mapping is stopping.
5. If you want to include this FlashCopy mapping in a Consistency Group, in the window that shown in Figure 10-171 on page 733, select Yes, add the mappings to a Consistency Group and also select the Consistency Group from the drop-down list.
If you do not want to include this FlashCopy mapping in a Consistency Group, select No, do not add the mappings to a Consistency Group (Figure 10-172).
6. Then click Finish, as shown in Figure 10-171 and Figure 10-172. 7. Check the result of this FlashCopy mapping (Figure 10-173 on page 734). For each FlashCopy mapping relationship created, a mapping name is automatically generated, starting with fcmapX, where X is an available number. If needed, you can rename these mappings; see 10.7.4 on page 692 for more information about this topic.
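Tip: FlashCopy mappings can also be created and started from the CLI. A minimal sketch (the volume names and rates are placeholders; -incremental corresponds to the advanced option described above):
svctask mkfcmap -source volume_A -target volume_A_copy -copyrate 50 -incremental
svctask startfcmap -prep fcmap0
The -prep flag flushes the cache for the source volume before the mapping is started.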
2. On the New FlashCopy Mapping window (Figure 10-175 on page 736), you need to select one FlashCopy preset. The GUI interface provides three presets (Snapshot, Clone, Backup) to simplify the more common FlashCopy operations.
The presets and their use cases are described here:
- Snapshot: Creates a copy-on-write point-in-time copy.
- Clone: Creates an exact replica of the source volume on a target volume. The copy can be changed without impacting the original volume.
- Backup: Creates a FlashCopy mapping that can be used to recover data or objects if the system experiences data loss. These backups can be copied multiple times from source and target volumes.
Whichever preset you select, you can customize various advanced options. To access these settings, click Advanced Settings (Figure 10-176 on page 737). If you prefer not to customize these settings, go directly to step 3 on page 737. You can customize the following options, as shown in Figure 10-176 on page 737:
- Background Copy Rate: This option determines the priority that is given to the copy process. A faster rate increases the priority of the process, which can affect the performance of other operations.
- Incremental: This option copies only the parts of the source or target volumes that have changed since the last copy. Incremental copies reduce the completion time of the copy operation.
Note: Even if the type of the FlashCopy mapping is incremental, the first copy process copies all of the data from the source to the target volume.
- Delete after completion: This option automatically deletes a FlashCopy mapping after the background copy is completed. Do not use this option when the background copy rate is set to zero (0).
- Cleaning Rate: This option minimizes the amount of time that a mapping is in the stopping state. If the mapping has not completed, the target volume is offline while the mapping is stopping.
3. If you want to include this FlashCopy mapping in a Consistency Group, in the next window select Yes, add the mappings to a Consistency Group and select the Consistency Group in the drop-down list (Figure 10-177). If you do not want to include this FlashCopy mapping in a Consistency Group, select No, do not add the mappings to a Consistency Group. Choose whichever option you prefer, then click Next (Figure 10-177).
4. In the next window (Figure 10-178 on page 738), select the storage pool that is used to automatically create new targets. You can choose to use the same storage pool that is used by the source volume, or you can select a different pool from the list. In the latter case, select one storage pool and then click Next.
5. Select whether you want the target volume to be thin-provisioned. There are three choices available, as shown in Figure 10-179 on page 738:
Yes, in which case you enter the following parameters:
- Real: Type the real size that you want to allocate. This size is the amount of disk space that is actually allocated. It can either be a percentage of the virtual size or a specific number in GB.
- Automatically Expand: Select auto expand, which allows the real disk size to grow as required.
- Warning Threshold: Type a percentage or select a specific size for the usage threshold warning. A warning is generated when the used disk capacity on the space-efficient copy first exceeds the specified threshold.
No
Inherit properties from source volume
Click Finish to complete the FlashCopy Mapping operation.
6. Check the result of this FlashCopy mapping, as shown in Figure 10-180. For each FlashCopy mapping relationship created, a mapping name is automatically generated starting with fcmapX, where X is an available number. If needed, you can rename these mappings; see 10.7.4 on page 692.
At this point, the FlashCopy mapping is ready to be used. Tip: You can invoke FlashCopy from the SVC GUI, but using the SVC GUI might be impractical if you plan to handle a large number of FlashCopy mappings or Consistency Groups periodically, or at varying times. In such cases, creating a script by using the CLI might be more convenient.
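As a sketch of such a script, the following shell fragment creates and starts one mapping per volume over SSH. The cluster address, user name, volume names, and copy rate are hypothetical, and the target volumes are assumed to already exist:

#!/bin/sh
# Create and start a FlashCopy mapping for each listed volume
for vol in DB_VOL1 DB_VOL2 DB_VOL3; do
  ssh admin@svc_cluster "svctask mkfcmap -source $vol -target ${vol}_TGT -name fc_$vol -copyrate 50"
  ssh admin@svc_cluster "svctask startfcmap -prep fc_$vol"
done

The -prep parameter prepares the mapping (flushing the cache) before it is started; without it, run svctask prestartfcmap first.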
Note: The snapshot creates a point-in-time view of production data. The snapshot is not intended to be an independent copy, but instead is used to maintain a view of the production data at the time the snapshot is created. Therefore, the snapshot holds only the data from regions of the production volume that have changed since the snapshot was created. Because the snapshot preset uses thin provisioning, only the capacity that is required for the changes is used.
Snapshot preset parameters:
- Background copy: No
- Incremental: No
- Delete after completion: No
- Cleaning rate: No
- Target pool: primary copy source pool
1. From the SVC Welcome panel, click Copy Services in the left menu and then click the FlashCopy panel.
2. Select the volume that you want to snapshot.
3. Click New Snapshot in the Actions menu (Figure 10-181).
4. A volume is created as a target volume for this snapshot in the same pool as the source volume. The FlashCopy mapping is created and it is started. You can check the FlashCopy progress in the Progress column or in the Running Tasks column as shown in Figure 10-182 on page 741.
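From the CLI, a rough equivalent of the snapshot preset is a mapping with a background copy rate of 0 to a thin-provisioned target in the same pool. This is only a sketch: the names PROD_VOL, PROD_SNAP, and POOL1, and the sizes, are hypothetical:

svctask mkvdisk -mdiskgrp POOL1 -iogrp io_grp0 -size 10 -unit gb -rsize 2% -autoexpand -name PROD_SNAP
svctask mkfcmap -source PROD_VOL -target PROD_SNAP -name snap_prod -copyrate 0
svctask startfcmap -prep snap_prod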
4. A volume is created as a target volume for this clone in the same pool as the source volume. The FlashCopy mapping is created and started as shown in Figure 10-184. You can check the FlashCopy progress in the Progress column or in the Running Tasks column.
4. A volume is created as a target volume for this backup in the same pool as the source volume. The FlashCopy mapping is created and started. You can check the FlashCopy progress in the Progress column or in the Running Tasks column (Figure 10-186 on page 745).
3. Enter the desired FlashCopy Consistency Group name and click Create (Figure 10-189).
Consistency Group name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The Consistency Group name can be between one and 63 characters in length. 4. Figure 10-190 on page 747 shows the result.
3. If you select a Consistency Group, click New FlashCopy Mapping in the Actions menu (Figure 10-192).
If you did not select a Consistency Group, click New FlashCopy Mapping (Figure 10-193). Consistency Groups: If no Consistency Group is defined, the mapping is a stand-alone mapping, and it can be prepared and started without affecting other mappings. All mappings in the same Consistency Group must have the same status to maintain the consistency of the group.
4. The New FlashCopy Mapping window opens (Figure 10-194). In this window you must create the relationships between the source volumes (the disks that are copied) and the target volumes (the disks that receive the copy). A mapping can be created between any two volumes in a cluster. Important: The source and target volumes must be of equal size.
Note: The volumes do not have to be in the same I/O group or storage pool. 5. Select a volume in the Source Volumes column using the drop-down list, then select a volume in the Target Volumes column using the drop-down list, and click Add, as shown in Figure 10-194 on page 748. Repeat this action to create other relationships. To remove a relationship that has been created, use the delete button.
Important: The source and target volumes must be of equal size. So for a given source volume, only the targets of the appropriate size are returned. 6. Click Next after all the relationships that you want to create are registered (Figure 10-195).
7. In the next window, you need to select one FlashCopy preset. The GUI provides three presets (Snapshot, Clone, Backup) to simplify the more common FlashCopy operations (Figure 10-196). The presets and their use cases are described here:
Snapshot: Creates a copy-on-write point-in-time copy of the source volume.
Clone: Creates an exact replica of the source volume on a target volume. The copy can be changed without impacting the original volume.
Backup: Creates a FlashCopy mapping that can be used to recover data or objects if the system experiences data loss. These backups can be copied multiple times from source and target volumes.
Whichever preset you select, you can customize various advanced options. To access these settings, click the Advanced Settings button. If you prefer not to customize these settings, go directly to step 8. You can customize the following options, as shown in Figure 10-197:
Background Copy Rate: This determines the priority that is given to the copy process. A faster rate increases the priority of the process, which might affect the performance of other operations.
Incremental: This copies only the parts of the source or target volumes that have changed since the last copy. Incremental copies reduce the completion time of the copy operation. Note: Even if the type of the FlashCopy mapping is incremental, the first copy process copies all of the data from the source to the target volume.
Delete after completion: This automatically deletes a FlashCopy mapping after the background copy is completed. Do not use this option when the background copy rate is set to zero (0).
Cleaning Rate: This minimizes the amount of time that a mapping is in the stopping state. If the mapping has not completed, the target volume is offline while the mapping is stopping.
8. If you did not create these FlashCopy mappings from a Consistency Group (see step 3 on page 747), you will have to confirm your choice by selecting No, do not add the mappings to a Consistency Group (Figure 10-198 on page 751).
9. Click Finish as shown in Figure 10-197 on page 750. 10. Check the result of this FlashCopy mapping in the Consistency Groups window, as shown in Figure 10-199. For each FlashCopy mapping relationship created, a mapping name is automatically generated starting with fcmapX, where X is an available number. If needed, you can rename these mappings; see 10.7.4 on page 692.
Tip: You can invoke FlashCopy from the SVC GUI, but using the SVC GUI might be impractical if you plan to handle a large number of FlashCopy mappings or Consistency Groups periodically, or at varying times. In this case, creating a script by using the CLI might be more convenient.
In the Dependent Mappings window (Figure 10-201), you can see the dependent mapping for a given volume or a FlashCopy mapping. If you click one of these volumes, you can see its properties. For more information about volume properties, see 10.7.1, Volume information on page 681.
4. In the Move FlashCopy Mapping to Consistency Group window, select the Consistency Group for this FlashCopy mapping using the drop-down list (Figure 10-203):
In the Remove FlashCopy Mapping from Consistency Group window, click Remove (Figure 10-205).
Tip: You can also right-click a FlashCopy mapping and select Edit Properties from the list. 4. In the Edit Properties window, you can modify the following parameters for a selected FlashCopy mapping, as shown in Figure 10-207:
Background Copy Rate: This determines the priority that is given to the copy process. A faster rate increases the priority of the process, which might affect the performance of other operations.
Cleaning Rate: This minimizes the amount of time that a mapping is in the stopping state. If the mapping has not completed, the target volume is offline while the mapping is stopping.
4. In the Rename Mapping window, type the new name that you want to assign to the FlashCopy mapping and click Rename (Figure 10-209 on page 756).
FlashCopy name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The mapping name can be between one and 63 characters in length.
3. Type the new name that you want to assign to the Consistency Group and click Rename (Figure 10-211).
Consistency Group name: The name can consist of the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the underscore (_) character. The name can be between one and 63 characters in length. However, the name cannot start with a number, the dash or the underscore.
4. From the Consistency Group panel, the new Consistency Group name is displayed.
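From the CLI, the same rename is a single command (a sketch; both group names are hypothetical):

svctask chfcconsistgrp -name NEW_GRP_NAME OLD_GRP_NAME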
4. The Delete Mapping window opens, as shown in Figure 10-213 on page 758. In the field Verify the number of FlashCopy mappings you are deleting, you need to enter a value matching the correct number of mappings that you want to remove. This verification helps prevent you from deleting the wrong mappings. If you still have target volumes that are inconsistent with the source volumes and you definitely want to delete these FlashCopy mappings, select the Delete the FlashCopy mapping even when the data on the target volume is inconsistent with the source volume option. Click Delete to complete the operation (Figure 10-213).
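The CLI equivalent is rmfcmap, where the -force parameter corresponds to deleting a mapping whose target is still inconsistent with its source (a sketch with a hypothetical mapping name):

svctask rmfcmap -force fcmap0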
4. The Warning window opens (Figure 10-215). Click OK to complete the operation.
4. You can check the FlashCopy progress in the Progress column of the table or in the Running Tasks section (Figure 10-217).
5. After the task is completed, the FlashCopy status is in a Copied state (Figure 10-218).
3. Click Start in the Actions menu (Figure 10-220) to start the FlashCopy Consistency Group.
4. You can check the FlashCopy Consistency Group progress in the Progress column or in the Running Tasks section (Figure 10-221 on page 761).
5. After the task is completed, the FlashCopy status is in a Copied state (Figure 10-222).
Perform the following steps to stop a FlashCopy mapping: 1. From the SVC Overview panel, click Copy Services and then click the FlashCopy, Consistency Groups, or FlashCopy Mappings panel. 2. Select the FlashCopy mapping that you want to stop in the table. 3. Click Stop in the Actions menu (Figure 10-223) to stop the FlashCopy mapping.
3. Notice that the FlashCopy mapping status has changed to Stopped (Figure 10-224).
4. The target volume is now shown as Offline in the Volumes menu (Figure 10-225).
Perform the following steps to stop a FlashCopy Consistency Group: 1. From the SVC Welcome panel, click Copy Services and then click the Consistency Groups panel. 2. In the left side of this panel, select the Consistency Group that you want to stop. 3. Click Stop in the Actions menu (Figure 10-226) to stop the FlashCopy Consistency Group.
3. Notice that the FlashCopy Consistency Group status has now changed to Stopped (Figure 10-227 on page 763).
Create a FlashCopy mapping with the fully allocated volume as the source and the Space-Efficient volume as the target. Important: The copy process overwrites all of the data on the target volume. You must back up all of the data before you start the copy process.
This capability enables you to reverse the direction of a FlashCopy map without having to remove existing maps, and without losing the data from the target as shown in Figure 10-229.
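On the CLI, this is expressed by starting the reverse mapping with the -restore parameter, which allows it to start even though its target is the source of another active mapping. A sketch, assuming a reverse mapping named fcmap_rev has already been created:

svctask startfcmap -prep -restore fcmap_rev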
In this section, we describe the tasks that you can perform at a remote copy level. There are two panels for visualizing and managing your remote copies: 1. The Remote Copy panel, shown in Figure 10-230 on page 765. The Metro Mirror and Global Mirror Copy Services features enable you to set up a relationship between two volumes, so that updates that are made by an application to one volume are mirrored on the other volume. The volumes can be in the same cluster or on two different clusters.
2. The Partnerships panel, shown in Figure 10-231 on page 766 Partnerships can be used to create a disaster recovery environment, or to migrate data between clusters that are in different locations. Partnerships define an association between a local cluster and a remote cluster.
10.9.2 Creating the SVC partnership between two remote SVC Clusters
We perform this operation to create the partnership on both clusters. Note: If you are creating an intracluster Metro Mirror, do not perform this next step to create the SVC cluster Metro Mirror partnership. Instead, go to 10.9.3, Creating stand-alone remote copy relationships on page 770. To create a partnership between the SVC clusters using the GUI, follow these steps: 1. From the SVC Overview panel, click Copy Services and then Partnerships. The Partnerships panel opens as shown in Figure 10-236.
2. Click the New Partnership button to create a new partnership with another cluster, as shown in Figure 10-237.
3. On the New Partnership window (Figure 10-238 on page 769), complete the following elements: Select an available cluster in the drop-down list. If there is no candidate, you will receive the following error message: This cluster does not have any candidates. Enter a bandwidth (MBps) that is used by the background copy process between the clusters in the partnership. Set this value so that it is less than or equal to the bandwidth that can be sustained by the communication link between the clusters. The link must be able to sustain any host requests and the rate of background copy.
4. Click the Create button to confirm the partnership relation. As shown in Figure 10-239, our partnership is in the Partially Configured state, because we have only performed the work on one side of the partnership so far.
To fully configure the cluster partnership, we must perform the same steps on the other SVC cluster (ITSO_SVC3) as we did on this one (ITSO_SVC2). For simplicity and brevity, only the two most significant windows are shown when the partnership is fully configured. 5. Launching the SVC GUI for ITSO_SVC3, we select ITSO_SVC2 for the cluster partnership and specify the available bandwidth for the background copy, again 200 MBps, and then click Create. Now that both sides of the SVC cluster partnership are defined, the resulting windows shown in Figure 10-240 and Figure 10-241 on page 770 confirm that our cluster partnership is now in the Fully Configured state. Figure 10-240 shows cluster ITSO_SVC2.
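The CLI equivalent must likewise be run on both clusters. This sketch uses the cluster names from our example and the same 200 MBps background copy bandwidth:

# On ITSO_SVC2:
svctask mkpartnership -bandwidth 200 ITSO_SVC3
# On ITSO_SVC3:
svctask mkpartnership -bandwidth 200 ITSO_SVC2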
3. In the New Relationship window, select the type of relationship that you want to create (Figure 10-243 on page 771):
Metro Mirror: This is a type of remote copy that creates a synchronous copy of data from a primary volume to a secondary volume. A secondary volume can either be located on the same cluster or on another cluster.
Global Mirror: This provides a consistent copy of a source volume on a target volume. Data is written to the target volume asynchronously, so that the copy is continuously updated, but the copy might not contain the last few updates in the event that a disaster recovery operation is performed.
Global Mirror with Change Volumes: This provides a consistent copy of a source volume on a target volume. Data is written to the target volume asynchronously so that the copy is continuously updated. Change volumes are used to record changes to the remote copy volume; these changes can then be copied to the remote cluster asynchronously. A FlashCopy relationship exists between the remote copy volume and the change volume. This FlashCopy mapping is for internal use; the user cannot manipulate it like a normal FlashCopy mapping.
Figure 10-243 Select the type of relation that you want to create
4. In the next window, select where the auxiliary volumes are located, as shown in Figure 10-244:
On this system: the volumes are located locally.
On another system: in this case, select the remote system from the drop-down list.
5. In this window you can create new relationships. Select a volume in the Master drop-down list, then select a volume in the Auxiliary drop-down lists for this master and click Add (Figure 10-245 on page 772). If needed, repeat this action to create other relationships. Important: The Master and Auxiliary must be of equal size. So for a given source volume, only the targets with the appropriate size are returned.
To remove a relationship, use the button shown in Figure 10-245. After all the relationships that you want to create are registered, click Next. 6. Select whether the volumes are already synchronized, as shown in Figure 10-246, then click Next.
7. Finally, on the last window, select whether you want to start copying the data, as shown in Figure 10-247, and then click Finish.
The relationships are visible in the Remote Copy panel. If you selected to copy the data, you can see that their status is Inconsistent Copying. You can check the copying progress in the Running tasks as shown in Figure 10-248 on page 773.
After the copy is finished, the relationship status changes to Consistent Synchronized.
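You can verify this from the CLI by listing the relationship and checking its state attribute (the relationship name rcrel0 is hypothetical):

svcinfo lsrcrelationship rcrel0

The state field progresses from inconsistent_copying to consistent_synchronized.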
3. Enter a name for the Consistency Group and then click Next (Figure 10-250).
Note: If you do not provide a name, the SVC automatically generates the name rccstgrpX, where X is the ID sequence number that is assigned by the SVC internally. You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The Consistency Group name can be between one and 15 characters in length.
4. In the next window, select where the auxiliary volumes are located, as shown in Figure 10-251:
On this system: the volumes are located locally.
On another system: in that case, select the remote system in the drop-down list.
After you make a selection, click Next.
5. Select whether you want to add relationships to this group, as shown in Figure 10-252. There are two options: If you answer Yes, click Next to continue the wizard and go to step 6. If you answer No, click Finish to create an empty Consistency Group that can be used later.
6. Select the type of relationship that you want to create (Figure 10-253):
Metro Mirror: This is a type of remote copy that creates a synchronous copy of data from a primary volume to a secondary volume. A secondary volume can either be located on the same cluster or on another cluster.
Global Mirror: This provides a consistent copy of a source volume on a target volume. Data is written to the target volume asynchronously so that the copy is continuously updated, but the copy might not contain the last few updates in the event that a disaster recovery operation is performed.
Global Mirror with Change Volumes: This provides a consistent copy of a source volume on a target volume. Data is written to the target volume asynchronously so that the copy is continuously updated. Change volumes are used to record changes to the remote copy volume; these changes can then be copied to the remote cluster asynchronously. A FlashCopy relationship exists between the remote copy volume and the change volume. This FlashCopy mapping is for internal use; the user cannot manipulate it like a normal FlashCopy mapping, and most svctask *fcmap commands will fail. A CLI sketch for this relationship type follows Figure 10-253.
Click Next.
Figure 10-253 Select the type of relation that you want to create
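For reference, the following is a sketch of creating such a relationship from the CLI. The volume, cluster, and relationship names are hypothetical, and because the cycling parameters are new in V6.3, verify -cyclingmode and -masterchange against the Command-Line Interface User's Guide before use:

svctask mkrcrelationship -master MASTER_VOL -aux AUX_VOL -cluster ITSO_SVC3 -global -cyclingmode multi -name gmcv_rel
svctask chrcrelationship -masterchange MASTER_CHANGE_VOL gmcv_rel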
7. As shown in Figure 10-254, you can optionally select existing relationships to add to the group, then click Next. Note: To select multiple relationships, hold down Ctrl and use your mouse to select the entries you want to include.
8. In this window, you can create new relationships. Select a volume in the Master drop-down list, then select a volume in the Auxiliary drop-down list for this master. Click Add, as shown in Figure 10-255. Repeat this action to create other relationships if needed. Important: The Master and Auxiliary volumes must be of equal size. So for a given source volume, only the targets of the appropriate size are included. To remove a relationship, use the button shown in Figure 10-255. After all the relationships that you want to create are registered, click Next.
9. Select whether the volumes are already synchronized, as shown in Figure 10-256, then click Next.
10. Finally, on the last window, select whether you want to start copying the data, as shown in Figure 10-257 on page 777, and then click Finish.
11. The relationships are visible in the Remote Copy panel. If you selected to copy the data, you can see that their status is Inconsistent Copying. You can check the copying progress in the Running tasks area, as shown in Figure 10-258.
After the copies are completed, the relationships and the Consistency Group change to the Consistent Synchronized status.
3. Type the new name that you want to assign to the Consistency Group and press Enter (Figure 10-260).
Consistency Group name: The Consistency Group name can consist of the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the underscore (_) character. The name can be between one and 15 characters in length. However, the name cannot start with a number, the dash, or the underscore.
4. From the Remote Copy panel, the new Consistency Group name is displayed.
2. Select the Remote Copy relationship mapping that you want to rename in the table. 3. Click Rename in the Actions menu (Figure 10-261 on page 779). Tip: You can also right-click a Remote Copy relationship and select Rename from the list.
4. In the Rename relationship window, type the new name that you want to assign to the relationship and click OK (Figure 10-262).
Remote Copy relationship name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The Remote Copy name can be between one and 15 characters in length.
3. Select the relationship that you want to move to Consistency Group. 4. Click Add to Consistency Group in the Actions menu as shown in Figure 10-263 on page 780. Tip: You can also right-click a Remote Copy relationship and select Add to Consistency Group from the list.
5. In the Add Relationship to Consistency Group window, select the Consistency Group for this Remote Copy relationship using the drop-down list (Figure 10-264).
4. Click Remove from Consistency Group in the Actions menu (Figure 10-265 on page 781). Tip: You can also right-click a Remote Copy relationship and select Remove from Consistency Group from the list.
5. In the Remove Relationship From Consistency Group window, click Remove (Figure 10-266).
3. Select the Remote Copy relationship that you want to start in the table. 4. Click Start in the Actions menu (Figure 10-267 on page 782) to start the Remote Copy process. Tip: You can also right-click a relationship and select Start from the list.
5. If the relationship was not consistent, the Remote Copy progress can be checked in the Running tasks (Figure 10-268).
6. After the task is completed, the Remote Copy relationship status has a Consistent Synchronized state (Figure 10-218 on page 760).
3. Click Start in the Actions menu (Figure 10-271) to start the Remote Copy Consistency Group.
4. You can check the Remote Copy Consistency Group progress as shown in Figure 10-272 on page 784.
5. After the task is completed, the Consistency Group and all its relationship statuses are in a Consistent Synchronized state (Figure 10-273).
5. A Warning window opens (Figure 10-275). A confirmation is needed to switch the Remote Copy relationship direction. As shown in Figure 10-275, the Remote Copy is switched from the master volume to the auxiliary volume. Click OK to confirm your choice.
6. The copy direction is now switched, as shown in Figure 10-276. The auxiliary volume is now accessible and indicated as the primary volume. There is now a synchronization from the auxiliary to the master volume.
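The CLI equivalent switches the copy direction by designating the new primary side (a sketch; the relationship name is hypothetical):

svctask switchrcrelationship -primary aux rcrel0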
Tip: You can also right-click a relationship and select Switch from the list.
4. A Warning window opens (Figure 10-278 on page 787). A confirmation is needed to switch the Consistency Group direction. In the example shown in Figure 10-278 on page 787, the Consistency Group is switched from the master group to the auxiliary group. Click OK to confirm your choice.
5. The Remote Copy direction is now switched, as shown in Figure 10-279. The auxiliary volume is now accessible and indicated as the primary volume. There is now a synchronization from the auxiliary to the master volume.
5. The Stop Remote Copy Relationship window opens (Figure 10-281). To allow secondary read/write access, select Allow secondary read/write access, then click Stop Relationship to confirm your choice.
6. The new relationship status can be checked as shown in Figure 10-282. The relationship is now stopped.
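On the CLI, stopping a relationship while enabling secondary access corresponds to the -access parameter (a sketch; the relationship name is hypothetical):

svctask stoprcrelationship -access rcrel0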
3. Click Stop in the Actions menu (Figure 10-283) to stop the Remote Copy Consistency Group. Tip: You can also right-click a relationship and select Stop from the list.
4. The Stop Remote Copy Consistency Group window opens (Figure 10-284). To allow secondary read/write access, select Allow secondary read/write access then click Stop Consistency Group to confirm your choice.
5. The new status can be checked as shown in Figure 10-285. The Consistency Group is now stopped.
4. The Delete Relationship window opens (Figure 10-287 on page 790). In the field Verify the number of relationships you are deleting, enter a value matching the correct number of relationships that you want to remove. This verification helps prevent you from deleting the wrong relationships. Click Delete to complete the operation (Figure 10-287 on page 790).
Perform the following steps to delete a Consistency Group: 1. From the SVC Overview panel, click Copy Services and then Remote Copy. 2. Select the Consistency Group that you want to delete in the left column. 3. Click Delete in the Actions menu (Figure 10-288).
4. A Warning window opens as shown in Figure 10-289. Click OK to complete the operation.
By moving the mouse over the tower in the left part of the panel, you can view the global storage usage, as shown in Figure 10-291 on page 792. Using this method, you can monitor the Physical Capacity and the Used Capacity of your cluster.
Figure 10-293 General cluster information
2. When you click the Info tab, the following information is displayed:
General information: Name, ID, Location
Capacity information: Total MDisk Capacity, Space in MDisk Groups, Space Allocated to Volumes, Total Free Space, Total Volume Capacity, Total Volume Copy Capacity, Total Used Capacity, Total Over Allocation
Cluster name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The cluster name can be between one and 63 characters in length. 4. Click Save. 5. A Warning window opens, as shown in Figure 10-295. If you are using the iSCSI protocol, changing either name also changes the iSCSI Qualified Name (IQN) of all of the nodes in the cluster and might require reconfiguration of all iSCSI-attached hosts, because the IQN for each node is generated using the cluster and node names.
Important: Before shutting down a cluster, quiesce all I/O operations that are destined for this cluster, because you will lose access to all of the volumes that are provided by this cluster. Failure to do so might result in failed I/O operations being reported to your host operating systems. There is no need to quiesce all I/O operations if you are only shutting down one SVC node. Begin the process of quiescing all I/O to the cluster by stopping the applications on your hosts that are using the volumes that are provided by the cluster. If you are unsure which hosts are using the volumes that are provided by the cluster, follow the procedure explained in 9.5.21, Showing the host to which the volume is mapped on page 508, and repeat this procedure for all volumes. From the System Status panel, perform the following steps to shut down your cluster: 1. Click the cluster name as shown in Figure 10-296.
2. Click the Manage tab and then click Shut Down Cluster as shown in Figure 10-297 on page 796.
3. The Confirm Cluster Shutdown window (Figure 10-298) opens. You will receive a message asking you to confirm whether you want to shut down the cluster. Ensure that you have stopped all FlashCopy mappings, Remote Copy relationships, data migration operations, and forced deletions before continuing. Click Yes to begin the shutdown process. Important: At this point, you will lose administrative contact with your cluster.
You have now completed the required tasks to shut down the cluster. At this point you can shut down the uninterruptible power supply units by pressing the power buttons on their front panels. Tip: When you shut down the cluster, it will not automatically start. You must manually start the cluster. If the cluster shuts down because the uninterruptible power supply unit has detected a loss of power, it will automatically restart when the uninterruptible power supply unit detects that the power has been restored (and the batteries have sufficient power to survive another immediate power failure).
Note: To restart the SVC cluster, you must first restart the uninterruptible power supply units by pressing the power buttons on their front panels. After they are on, go to the service panel of one of the nodes within your SVC cluster and press the power on button, releasing it quickly. After it is fully booted (for example, displaying Cluster: on line 1 and the cluster name on line 2 of the SVC front panel), you can start the other nodes in the same way. As soon as all nodes are fully booted and you have reestablished administrative contact using the GUI, your cluster is fully operational again.
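For reference, the CLI equivalent of this shutdown is the stopcluster command; run it only after quiescing I/O as described above:

svctask stopcluster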
2. Click the Manage tab and then click Upgrade Cluster as shown in Figure 10-300.
2. Click the Info tab to obtain the following information:
General information: Name, ID, Number of Nodes, Number of Hosts, Number of Volumes
Memory information: FlashCopy, Global Mirror and Metro Mirror, Volume Mirroring, RAID
2. Click the Manage tab. 3. From this tab, as shown in Figure 10-303 on page 801, you can modify:
The I/O Group name. I/O Group name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The I/O Group name can be between one and 63 characters in length.
The amount of memory for the following features: FlashCopy (default 20 MB, maximum 512 MB), Global Mirror and Metro Mirror (default 20 MB, maximum 512 MB), Volume Mirroring (default 20 MB, maximum 512 MB), and RAID (default 40 MB, maximum 512 MB).
Important: For Volume mirroring, Copy Services (FlashCopy, Metro Mirror, and Global Mirror) and RAID operations, memory is traded against memory that is available to the cache. The amount of memory can be decreased or increased. The maximum combined memory size across all features is 552 MB.
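From the CLI, the per-feature memory can be changed with the chiogrp command (a sketch; the 64 MB value and I/O group name are examples only):

svctask chiogrp -feature flashcopy -size 64 io_grp0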
2. Click the Info tab to obtain the following information:
General information: Name, ID, Status, Hardware, WWNN, I/O Group, Configuration node, Failover Partner node, iSCSI Name (IQN), iSCSI Alias, Failover iSCSI Name, Failover iSCSI Alias (if iSCSI failover is active), Serial Number, Unique ID
Redundancy information
iSCSI information
UPS information
Ports information: WWPNs, Status, Speed
3. Click the VPD tab to display the vital product data (VPD) for this node. Note: The amount of information in the vital product data (VPD) tab is extensive, so we do not describe it in this section. For the list of these elements, refer to the Command-Line Interface User's Guide - Version 6.3.0 and search for the lsnodevpd command.
2. Click the Manage tab. 3. Specify a new name for the node as shown in Figure 10-306.
Node name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The node name can be between one and 63 characters in length.
4. Click Save. 5. A Warning window opens, as shown in Figure 10-307 on page 804, because the iSCSI Qualified Name (IQN) for each node is generated using the cluster and node names. If you are using the iSCSI protocol, changing either name also changes the IQN of all of the nodes in the cluster and might require reconfiguration of all iSCSI-attached hosts.
6. To confirm that you want to change the node name, click OK.
Important: Keep in mind that you need to have at least two nodes in an I/O group. Add your available nodes in sequence. 2. Select the node you want to add to your cluster using the drop-down list. Change its name, if needed, and click Add Node as shown in Figure 10-309 on page 805.
3. As shown in Figure 10-310, a window appears to inform you about the time required to add a node to the cluster.
4. If you want to add it, click OK. Important: When a node is added to a cluster, it displays a state of adding and a yellow color, as shown in Figure 10-292 on page 793. It can take as long as 30 minutes for the node to be added to the cluster, particularly if the software version of the node has changed.
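The CLI equivalent lists the candidate nodes and then adds one by its panel name (a sketch; the panel name and node name are hypothetical):

svcinfo lsnodecandidate
svctask addnode -panelname 108283 -iogrp io_grp0 -name SVC_NODE2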
2. Click the Manage tab and then click Remove node as shown in Figure 10-312.
3. A Warning window opens, as shown in Figure 10-313 on page 807. By default, the cache is flushed before the node is deleted to prevent data loss if a failure occurs on the other node in the I/O group. In certain circumstances, such as when the system is already degraded, you can take the specified node offline immediately, without flushing the cache or ensuring that data loss does not occur, by selecting the Bypass check for volumes that will go offline, and remove the node immediately without flushing its cache check box.
If this node is the last node in the cluster, the warning message is different, as shown in Figure 10-314. Before you delete the last node in the cluster, ensure that you really want to destroy the cluster. Removing the last node in the cluster destroys the cluster. The user interface and any open CLI sessions are lost.
4. If you want to remove it, click OK. This makes the node a candidate to be added back into this cluster or into another cluster.
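The CLI equivalent is a single command (a sketch; the node name is hypothetical, and the -force parameter corresponds to bypassing the cache flush check):

svctask rmnode SVC_NODE2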
10.13 Troubleshooting
Events detected by the system are saved in an event log. When an entry is made in this event log, the condition is analyzed and classified to help you diagnose problems.
The highest-priority event is indicated, along with information about how long ago the event occurred. It is important to note that if an event is reported, you must select the event and run a fix procedure.
Event properties
To retrieve properties and sense about a specific event, perform the following steps: 1. Select an event in the table. 2. Click Properties in the Actions menu (Figure 10-316 on page 808).
Tip: You can also obtain access to the Properties action by right-clicking an event.
3. The Properties and Sense Data for Event sequence_number window (where sequence_number is the sequence number of the event that you selected in the previous step) opens, as shown in Figure 10-317 on page 809.
Tip: From the Properties and Sense Data for Event window, you can use the Previous and Next buttons to navigate between events. 4. Click Close to return to the Recommended Actions panel.
Tip: You can also obtain access to the Run Fix Procedure action by right-clicking an event. 3. The Directed Maintenance Procedure window opens, as shown in Figure 10-319. You have to follow the wizard and its steps to fix the event. Note: We do not describe all the possible steps here, because the steps involved depend on the event.
To access this panel, from the Overview panel shown in Figure 10-1 on page 632, select Monitoring and then Event Log; then, in the upper left corner, select what you want to display.
Certain alerts have a four-digit error code and a fix procedure that helps you fix the problem. Other alerts also require action, but do not have a fix procedure. Messages are fixed when you acknowledge reading them.
Filtering events
You can filter events in different ways. Filtering can be based on event status (see Basic filtering), or over a period of time (see Time filtering on page 812). Certain events require a certain number of occurrences in 25 hours before they are displayed as unfixed. If they do not reach this threshold in 25 hours, they are flagged as expired. Monitoring events are below the coalesce threshold and are usually transient. You can also sort events by time or error code. When you sort by error code, the most serious events (those with the lowest numbers) are displayed first.
Basic filtering
The event log display can be filtered in three ways using the drop-down menu in the upper right corner of the panel (see Figure 10-321 on page 811):
Recommended (events requiring attention): displays all unfixed alerts and messages.
Unfixed Messages and Alerts: displays all alerts and messages.
Show all (include below-threshold events): displays all events (alerts, messages, monitoring, and expired).
Figure 10-321 Filter event log display
Time filtering
There are two ways to perform time filtering: by selecting a start date and time and an end date and time; or by selecting an event and showing the entries within a certain period of time of this event. In this section, we demonstrate both methods. By selecting a start date and time, and an end date and time: to use this time frame filter, perform the following steps: Click Filter by Date in the Actions menu (Figure 10-322).
Tip: You can also obtain access to the Filter by Date action by right-clicking an event. The Date/Time Filter window opens (Figure 10-323). From this window, select a start date and time and an end date and time.
Click Filter and Close. Your panel is now filtered based on the time frame. To disable this time frame filter, click Reset Date Filter in the Actions menu (Figure 10-324 on page 813).
Select an event and show the entries within a certain period of time of this event: to use this time frame filter, perform the following steps: Select an event in the table. In the Actions menu, click Show entries within... and select minutes, hours, or days, and finally select a value (Figure 10-325).
Tip: You can also access the Show entries within... action by right-clicking an event. Your window is now filtered based on the time frame (Figure 10-326).
To disable this time frame filter, click Reset Date Filter in the Actions menu (Figure 10-327).
Event properties
To retrieve properties and sense about a specific event, perform the following steps: 1. Select an event in the table. 2. Click Properties in the Actions menu (Figure 10-328).
Tip: You can also access the Properties action by right-clicking an event.
3. The Properties and Sense Data for Event sequence_number window (where sequence_number is the sequence number of the event that you selected in the previous step) opens, as shown in Figure 10-329.
Tip: From the Properties and Sense Data for Event window, you can use the Previous and Next buttons to navigate between events.
Tip: You can also access the Mark as fixed action by right-clicking an event. 3. The Warning window opens (Figure 10-331).
4. Click OK to confirm your choice. Note: To be able to see fixed events, you need to filter the event log panel using the Expanded (include fixed events) filter profile or the Show all (include below-threshold events) filter profile.
Tip: You can also access the Mark as unfixed action by right-clicking an event.
To run a procedure to fix an alert, perform the following steps: 1. Select an alert with a four-digit error code in the table. 2. Click Run Fix Procedure in the Actions menu (Figure 10-334).
Tip: You can also access the Run Fix Procedure action by right-clicking an alert.
3. The Directed Maintenance Procedure window opens (Figure 10-335 on page 818). You must follow the wizard and its steps to fix the event. Note: We do not describe all the various steps, because they depend on the alert.
Clear log
To clear the logs, perform the following steps: 1. Click Clear Log (Figure 10-336).
2. A Warning window opens (Figure 10-337). From this window, you must confirm that you want to delete the logs.
2. A Download Support Packages window opens (Figure 10-340 on page 821). From there, select which kind of logs you want to download:
Standard logs: contain the most recent logs that have been collected for the cluster. These logs are the most commonly used by support to diagnose and solve problems.
Standard logs plus one existing statesave: contain the standard logs for the cluster and the most recent statesave from any of the nodes in the cluster. Statesaves are also known as dumps or livedumps.
Standard logs plus most recent statesave from each node: contain the standard logs for the cluster and the most recent statesave from each node in the cluster.
Standard logs plus new statesaves: generates a new statesave (livedump) for all the nodes in the cluster and packages them with the most recent logs.
Note: Depending on your choice, this action can take several minutes to complete.
3. Click Download to confirm your choice (Figure 10-340). 4. Finally, select where you want to save these logs (Figure 10-341).
2. On the detailed view, select the node from which you want to download logs using the drop-down menu in the upper right corner of the panel (Figure 10-343 on page 822).
3. Select the package or packages that you want to download (Figure 10-344).
Tip: To select multiple packages, hold down the Ctrl key and use the mouse to select the entries you want to include.
Tip: You can also access the Download action by right-clicking a package. 5. Finally, select where you want to save these logs on your workstation. Tip: You can also delete packages by clicking Delete in the Actions menu.
Each user account has a name, a role, and a password assigned to it, which differs from the Secure Shell (SSH) key-based role approach that is used by the CLI. Note that starting with version 6.3, you can access the CLI with a password and no SSH key. We describe authentication in detail in 2.9, User authentication on page 44. The role-based security feature organizes the SVC administrative functions into groups, known as roles, so that permissions to execute the various functions can be granted differently to the separate administrative users. There are four major roles and one special role. Table 10-1 on page 825 lists the user roles.
Table 10-1 Authority roles

Role: Security Admin
Users: Superusers
Allowed commands: All commands

Role: Administrator
Users: Administrators that control the SVC
Allowed commands: All commands except svctask: chauthservice, mkuser, rmuser, chuser, mkusergrp, rmusergrp, chusergrp, and setpwdreset

Role: Copy Operator
Allowed commands: All svcinfo commands and the following svctask commands: prestartfcconsistgrp, startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap, startfcmap, stopfcmap, chfcmap, startrcconsistgrp, stoprcconsistgrp, switchrcconsistgrp, chrcconsistgrp, startrcrelationship, stoprcrelationship, switchrcrelationship, chrcrelationship, and chpartnership

Role: Service
Users: For users that perform service maintenance and other hardware tasks on the cluster
Allowed commands: All svcinfo commands and the following svctask commands: applysoftware, setlocale, addnode, rmnode, cherrstate, writesernum, detectmdisk, includemdisk, clearerrlog, cleardumps, settimezone, stopcluster, startstats, stopstats, and settime

Role: Monitor
Allowed commands: All svcinfo commands and the following svctask commands: finderr, dumperrlog, dumpinternallog, chcurrentuser, and the svcconfig command: backup
The superuser user is a built-in account that has the Security Admin user role permissions. You cannot change permissions or delete this superuser account; you can only change the password. You can also change this password manually on the front panels of the cluster nodes. An audit log keeps track of actions that are issued through the management GUI or the command-line interface. For more information about this topic, see 10.14.9, Audit log information on page 837.
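From the CLI, the audit log can be inspected directly; this sketch shows the five most recent entries:

svcinfo catauditlog -first 5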
Enter a new user name in the Name field. User name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The user name can be between one and 256 characters in length.
Tip: You can also change user properties by right-clicking a user and selecting Properties from the list.
From this window, you can change the authentication mode and local credentials. Authentication Mode: There are two types of authentication available in this section: Local: The authentication method is located on the system. Users must be part of a user group that authorizes them to perform specific sets of operations. If you select this type of authentication, use the drop-down list to select the user group (Table 10-1 on page 825) that you want the user to be part of.
Remote: Remote authentication allows users of SAN management applications, such as IBM Tivoli Storage Productivity Center, to authenticate to the cluster using the authentication service provided by the SAN management application. Ensure that the remote authentication service is configured for the SAN management application. To complete this task, you need the following information regarding the remote authentication service: the web address for the remote authentication service, and the user name and password for HTTP basic authentication. These credentials are created by and obtained from the administrator of the remote authentication service.
Local Credentials: There are two types of local credentials that can be configured in this section, depending on your needs:
GUI authentication: The password authenticates users to the management GUI. You need to enter the password in the Password field. Password: The password can be between 6 and 64 characters in length and it cannot begin or end with a space.
CLI authentication: The SSH key authenticates users to the command-line interface. The SSH public key needs to be uploaded using the Browse... button in the SSH Public Key field.
6. To confirm the changes, click OK (see Figure 10-351 on page 828).
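From the CLI, a local user with both types of credentials can be created in one command. This is a sketch; the user name, password, and key file path are hypothetical:

svctask mkuser -name jdoe -usergrp Administrator -password Passw0rd -keyfile /tmp/jdoe.pub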
4. The Warning window opens (Figure 10-353). Click OK to complete the operation.
4. The Warning window opens (Figure 10-355). Click OK to complete the operation.
4. The Delete User window opens (Figure 10-357). Click Delete to complete the operation.
Enter a name for the group in the Group Name field. Group name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The group name can be between one and 63 characters in length. Role section: A role needs to be selected from Monitor, Copy Operator, Service, Administrator, or Security Admin. See Table 10-1 on page 825 for more information about these roles.
Note: Remote authentication allows users of SAN management applications, such as IBM Tivoli Storage Productivity Center, to authenticate to the cluster using the authentication service provided by the SAN management application. 4. To create the group name, click Create (Figure 10-359 on page 833). 5. You can verify the creation in the Users panel (Figure 10-360).
From this window, you can change the role: Role: A role needs to be selected from Monitor, Copy Operator, Service, Administrator, or Security Admin. See Table 10-1 on page 825 for more information about these roles. Note: Remote authentication allows users of SAN management applications, such as IBM Tivoli Storage Productivity Center, to authenticate to the cluster using the authentication service provided by the SAN management application.
4. There are two options: If you do not have any users in this group, the Delete User Group window opens as shown in Figure 10-357 on page 832. Click Delete to complete the operation.
If you have users in this group, the Delete User Group window opens as shown in Figure 10-365 on page 837. The users of this group will be moved to the Monitor user group.
Time filtering
There are two ways to perform time filtering: by selecting a start date and time and an end date and time; or by selecting an entry and showing the entries within a certain period of time of this entry. In this section, we demonstrate both methods. By selecting a start date and time and an end date and time: to use this time frame filter, perform the following steps: Click Filter by Date in the Actions menu (Figure 10-367).
Tip: You can also access the Filter by Date action by right-clicking an entry.
The Date/Time Filter window opens (Figure 10-368). From this window, select a start date and time and an end date and time.
Click Filter and Close. Your panel is now filtered based on the time frame. To disable this time frame filter, click Reset Date Filter in the Actions menu (Figure 10-369).
By selecting an entry and showing the entries within a certain period of time of this entry: to use this time frame filter, perform the following steps: Select an entry in the table. In the Actions menu, click Show entries within... and select minutes, hours, or days, and finally select a value (Figure 10-370).
Tip: You can also access the Show entries within... action by right-clicking an entry. Your panel is now filtered based on the time frame (Figure 10-370 on page 840).
To disable this time frame filter, click Reset Date Filter in the Actions menu (Figure 10-372).
10.15 Configuration
In this section we describe how to configure different aspects of the SVC.
Management IP addresses
In this section, we discuss the modification of management IP addresses. Management IP addresses can be defined for the system. The system supports one to four IP addresses. You can assign these addresses to two Ethernet ports and their backup ports. Multiple ports and IP addresses provide redundancy for the system in the event of connection interruptions. At any point in time, the system has an active management interface. Ethernet Port 1 must always be configured, and the use of Port 2 is optional. Configuring both ports provides redundancy for the Ethernet connections. If you have configured both ports and you cannot connect through one IP address, attempt to access the system through the alternate IP address. Both IPv4 and IPv6 address formats are supported. Ethernet ports can have either IPv4 addresses or IPv6 addresses, or both. Important: If you specify a new cluster IP address, the existing communication with the cluster through the GUI is lost. You need to relaunch the SAN Volume Controller Application from the GUI Welcome panel. You must use the new IP address to reconnect to the management GUI. When you reconnect, accept the new site certificate. Modifying the IP address of the cluster, although quite simple, requires reconfiguration for other items within the SVC environments, including reconfiguring the central administration GUI by adding the cluster again with its new IP address. Perform the following steps to modify the cluster IP addresses of our SVC configuration: 1. From the SVC Overview panel, select Settings and then Network. 2. In the left column, select Management IP Addresses.
4. Click a port to configure the cluster's management IP address. Notice that you can configure both ports on the SVC node (Figure 10-374).
5. Depending on whether you select to configure an IPv4 or IPv6 cluster, there is different information to enter.
For IPv4: Type an IPv4 address in the IP Address field. Type an IPv4 gateway in the Gateway field. Type an IPv4 subnet mask.
For IPv6: Select the Show IPv6 button. Type an IPv6 prefix in the IPv6 Network Prefix field (the prefix can have a value of 0 to 127). Type an IPv6 address in the IP Address field. Type an IPv6 gateway in the Gateway field.
6. After the information is filled in, click OK to confirm the modification (Figure 10-374 on page 842).
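The CLI equivalent is the chclusterip command (a sketch with hypothetical addresses; remember that changing the management IP drops your current GUI and CLI sessions):

svctask chclusterip -clusterip 10.10.10.20 -gw 10.10.10.1 -mask 255.255.255.0 -port 1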
3. Select one node, then click the port to which you want to assign a service IP address (Figure 10-376 on page 844).
4. Depending on whether you installed an IPv4 or IPv6 cluster, there is different information to enter.
For IPv4: Type an IPv4 address in the IP Address field. Type an IPv4 gateway in the Gateway field. Type an IPv4 subnet mask.
For IPv6: Select the Show IPv6 button. Type an IPv6 prefix in the IPv6 Network Prefix field (the prefix can have a value of 0 to 127). Type an IPv6 address in the IP Address field. Type an IPv6 gateway in the Gateway field.
5. After the information is filled in, click OK to confirm the modification (Figure 10-377).
The following parameters can be updated:
Cluster Name: It is important to set the cluster name correctly because it is part of the iSCSI qualified name (IQN) for the node. Important: If you change the name of the cluster after iSCSI is configured, iSCSI hosts might need to be reconfigured. To change the cluster name, click the cluster name and specify the new name. Cluster name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The cluster name can be between one and 63 characters in length.
iSCSI Ethernet Ports: The iSCSI configuration can be set for each Ethernet port. Perform the following steps to change an iSCSI IP: Click a port and, depending on whether you installed an IPv4 or IPv6 cluster, enter the appropriate information. For IPv4: enter an IP address, a gateway, and a subnet mask. For IPv6: enter an IP prefix, an IP address, and a gateway.
After the information is filled in, click OK to confirm the modification. Important: When reconfiguring IP ports, be aware that you must reconnect already configured iSCSI connections if changes are made to the IP addresses of the nodes.
iSCSI Aliases
An iSCSI alias is a user-defined name that identifies the node to the host. Perform the following steps to change an iSCSI alias: Click an iSCSI alias and specify a name for it. Each node has a unique iSCSI name associated with two IP addresses. After the host has initiated the iSCSI connection to a target node, this IQN from the target node is visible in the iSCSI configuration tool on the host.
iSNS and CHAP: You can specify the IP address for the iSCSI Storage Name Service (iSNS). Host systems use the iSNS server to manage iSCSI targets and for iSCSI discovery. You can also enable CHAP to authenticate the system and iSCSI-attached hosts with the specified shared secret. The CHAP secret is the authentication method that is used to restrict access for other iSCSI hosts that use the same connection. You can set the CHAP for the whole cluster under the cluster properties or for each host definition. The CHAP secret must be identical on the server and the cluster/host definition. You can create an iSCSI host definition without using CHAP.
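As a sketch, cluster-wide CHAP authentication can also be set from the CLI; the secret shown is an example only, and you should verify these parameters against the Command-Line Interface User's Guide:

svctask chcluster -iscsiauthmethod chap -chapsecret mysecret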
4. A wizard appears (Figure 10-381). You must enter contact information (contact name, email reply address, machine location, and phone numbers) so that IBM Support personnel can contact this person to assist with problem resolution. Ensure that all contact information is valid, then click Next.
5. On the next page (Figure 10-382), configure at least one email server that is used by your site and optionally enable inventory reporting. Enter a valid IP address and a server port for each server added. Ensure that the email servers are valid. Inventory reports allow IBM service personnel to proactively notify you of any known issues with your system. To activate it, enable the inventory reporting and choose a reporting interval in this window.
6. Next (Figure 10-383), you can configure email addresses to receive notifications. It is advisable to configure an email address belonging to a support user with the error event notification type enabled, to notify IBM service personnel if an error condition occurs on your system. Ensure that all email addresses are valid.
7. The last window (Figure 10-384 on page 849) displays a summary of your Email Event Notification wizard. Click Finish to complete the setup.
8. The wizard is now closed. Additional information has been added to the panel, as shown in Figure 10-385. You can edit or disable email notification from this window.
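The same setup can be scripted from the CLI. The following sketch uses hypothetical addresses and contact details; substitute your own values:

svctask chemail -reply admin@example.com -contact "John Doe" -primary 5551234 -location "Building 42"
svctask mkemailserver -ip 10.44.36.25 -port 25
svctask mkemailuser -address support@example.com -error on -warning off -info off
svctask startemail

The startemail command activates email notification after at least one email server and one recipient are defined.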
Server port: The remote port number for the SNMP server. The remote port number must be a value between 1 and 65535.
Community: The SNMP community is the name of the group to which devices and management stations that run SNMP belong.
Event notifications:
Select Error if you want the user to receive messages about problems, such as hardware failures, that must be resolved immediately.
Important: Navigate to Recommended Actions to run fix procedures on these notifications.
Select Warning if you want the user to receive messages about problems and unexpected conditions. Investigate the cause immediately to determine any corrective action.
Important: Navigate to Recommended Actions to run fix procedures on these notifications.
Select Info if you want the user to receive messages about expected events. No action is required for these events.
To remove an SNMP server or to add another one, use the corresponding buttons next to the server entry.
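An SNMP server can also be defined and removed from the CLI. The address, community name, and server ID below are illustrative:

svctask mksnmpserver -ip 10.44.36.31 -community public -error on -warning on -info off
svctask rmsnmpserver 0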
Syslog notifications
The syslog protocol is a standard protocol for forwarding log messages from a sender to a receiver on an IP network. The IP network can be either IPv4 or IPv6. The system can send syslog messages that notify personnel about an event.
You can configure a syslog server to receive log messages from various systems and store them in a central repository by entering the following information (see Figure 10-387):
IP Address: The address of the syslog server.
Facility: The facility determines the format of the syslog messages and can be used to determine the source of the message.
Message format: The message format depends on the facility. The system can transmit syslog messages in two formats: the concise message format provides standard detail about the event, and the expanded format provides more details about the event.
Event notifications:
Select Error if you want the user to receive messages about problems, such as hardware failures, that must be resolved immediately.
Important: Navigate to Recommended Actions to run fix procedures on these notifications.
Select Warning if you want the user to receive messages about problems and unexpected conditions. Investigate the cause immediately to determine any corrective action.
Important: Navigate to Recommended Actions to run fix procedures on these notifications.
Select Info if you want the user to receive messages about expected events. No action is required for these events.
To remove a syslog server or to add another one, use the corresponding buttons next to the server entry.
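A syslog server can likewise be defined from the CLI; the address and facility value in this sketch are illustrative:

svctask mksyslogserver -ip 10.44.36.32 -facility 0 -error on -warning on -info off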
The syslog messages can be sent in either compact message format or expanded message format. Example 10-1 on page 852 shows a compact format syslog message.
Example 10-1 Compact format syslog message example

IBM2145 #NotificationType=Error #ErrorID=077001 #ErrorCode=1070 #Description=Node CPU fan failed #ClusterName=SVCCluster1 #Timestamp=Wed Jul 02 08:00:00 2008 BST #ObjectType=Node #ObjectName=Node1 #CopyID=0 #ErrorSequenceNumber=100

Example 10-2 shows an expanded format syslog message.
Example 10-2 Full format syslog message example
IBM2145 #NotificationType=Error #ErrorID=077001 #ErrorCode=1070 #Description=Node CPU fan failed #ClusterName=SVCCluster1 #Timestamp=Wed Jul 02 08:00:00 2008 BST #ObjectType=Node #ObjectName=Node1 #CopyID=0 #ErrorSequenceNumber=100 #ObjectID=2 #NodeID=2 #MachineType=21454F2#SerialNumber=1234567 #SoftwareVersion=5.1.0.0 (build 8.14.0805280000)#FRU=fan 24P1118, system board 24P1234 #AdditionalData(0->63)=00000000210000000000000000000000000000000000000000000000000 00000000000000000000000000000000000000000000000000000000000000000000000#Additional Data(64-127)=000000000000000000000000000000000000000000000000000000000000000000000 00000000000000000000000000000000000000000000000000000000000
3. From this panel, you can modify:
The time zone: Select a time zone for your cluster by using the drop-down list.
The date and time: Two options are available:
If you are not using a Network Time Protocol (NTP) server, select the Set Date and Time button, and then manually enter the date and the time for your cluster, as shown in Figure 10-389. You can also use the Use Browser Settings button to automatically adjust the date and time of your SVC cluster to match your local workstation date and time.
If you are using a Network Time Protocol (NTP) server, select the Set NTP Server IP Address button and then enter the IP address of the NTP server as shown in Figure 10-390.
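If you prefer the CLI, the same settings can be applied with the following sketch. The time zone ID, time value, and NTP address are illustrative:

svcinfo lstimezones
svctask settimezone -timezone 520
svctask setsystemtime -time 101112302011
svctask chsystem -ntpip 10.44.36.12

The -time value is assumed to use the MMDDHHmmYYYY format; when an NTP server is configured, setting the time manually is unnecessary.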
10.15.10 Licensing
Perform the following steps to configure licensing settings: 1. From the SVC Overview panel, select Settings and then General. 2. In the left column, select Licensing (Figure 10-391 on page 854).
3. Set the licensing values for the IBM System Storage SAN Volume Controller for the following elements: Virtualization Limit Enter the capacity of the storage that will be virtualized by this cluster. FlashCopy Limit Enter the capacity that is available for FlashCopy mappings. Important: The used capacity for FlashCopy mapping is the sum of all of the volumes that are the source volumes of a FlashCopy mapping. Global and Metro Mirror Limit Enter the capacity that is available for Metro Mirror and Global Mirror relationships. Important: The used capacity for Global Mirror and Metro Mirror is the sum of the capacities of all of the volumes that are in a Metro Mirror or Global Mirror relationship; both master and auxiliary volumes are counted.
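The same values can be set from the CLI with the chlicense command. The capacities below, in terabytes, are examples only:

svctask chlicense -virtualization 100
svctask chlicense -flash 20
svctask chlicense -remote 20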
3. From here you can configure the following elements:
Refresh GUI Objects: This action causes the GUI to refresh all of its views. It clears the GUI cache, and the GUI looks up every object again.
Important: This is a support-only action button.
Restore Default Browser Preferences: This action deletes all GUI preferences that are stored in the browser and restores the default preferences.
Table Selection: If selected, this action shows Select/Deselect All in each table in the cluster (Figure 10-393).
Navigation: If selected, this action shows navigation as tabs when not in low graphics mode (Figure 10-394 on page 855).
The format for the software upgrade package name ends in four positive integers separated by dots. For example, a software upgrade package might have the name IBM_2145_INSTALL_6.3.0.0.
The installation and usage of this utility are nondisruptive and do not require restarting any SVC nodes, so there is no interruption to host I/O. The utility is only installed on the current configuration node. System administrators must continue to check whether the version of code that they plan to install is the latest version. You can obtain the latest information at this website:
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1001707#_Latest_SAN_Volume_Controller%20Code
This utility is intended to supplement rather than duplicate the existing tests that are carried out by the SVC upgrade procedure (for example, checking for unfixed errors in the error log).
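The test utility can also be applied and run from the CLI. This is a sketch only; the package file name and target version are illustrative, so use the file and level that you actually downloaded:

svctask applysoftware -file IBM2145_INSTALL_svcupgradetest_6.18
svcupgradetest -v 6.3.0.0

The utility reports any conditions that should be resolved before you start the upgrade.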
2. Log in with your superuser password; the SVC management home page will display. From there, go to the Settings and then General menu (Figure 10-396) and click Advanced.
3. In the Advanced menu, click the Upgrade Software item; the window shown in Figure 10-397 on page 858 will display.
From the window shown in Figure 10-397, you can click the following buttons:
Check for updates: Use this option to check on the IBM website whether an SVC software version newer than the one installed on your SVC is available. You need an Internet connection to perform this check.
Launch Upgrade Wizard: Use this option to launch the software upgrade process.
4. Click Launch Upgrade Wizard to start the upgrade process; you will be redirected to the window shown in Figure 10-398.
From the window shown in Figure 10-398 you can download the Upgrade Test Utility from the IBM website, or you can browse and upload the Upgrade Test Utility from the location where you saved it, as shown in Figure 10-399 on page 859.
5. When the Upgrade Test Utility has been uploaded, the window shown in Figure 10-400 displays.
6. When you click Next (Figure 10-400), the Upgrade Test Utility will be applied. You will be redirected to the window shown in Figure 10-401.
7. Click Close (Figure 10-401 on page 860), and you will be redirected to the window shown in Figure 10-402. From here you can run your Upgrade Test Utility for the level you need.
8. Click Next (Figure 10-402), and you will be redirected to the window shown in Figure 10-403. At this point the Upgrade Test Utility will run. You will see the suggested actions (if any are needed) or simply the window shown in Figure 10-403.
9. Click Next (Figure 10-403) to start the SVC software upload procedure, and you will be redirected to the window shown in Figure 10-404.
From the window shown in Figure 10-404 you can download the SVC software upgrade package directly from the IBM website, or you can browse and upload the software upgrade package from the location where you saved it, as shown in Figure 10-405 on page 861.
Click Open (Figure 10-405), and you will be redirected to the windows shown in Figure 10-406 and Figure 10-407.
Figure 10-407 shows that the SVC package uploading has completed.
10. Click Next, and you will be redirected to the window shown in Figure 10-408.
11. When you click Finish (Figure 10-408 on page 862), the SVC software upgrade will start, and you will be redirected to the window shown in Figure 10-409.
When you click Close (Figure 10-409), the warning message shown in Figure 10-410 will be displayed.
12. When you click OK (Figure 10-410), you will have completed upgrading the SVC software. Now you are redirected to the window shown in Figure 10-411.
After a few minutes the window shown in Figure 10-412 on page 863 will display, showing that the first node has been upgraded.
Now the process will install the new SVC software version on the remaining node in the cluster. You can check the upgrade status as shown in Figure 10-412.
13. After all nodes have been rebooted, you will have completed the SVC software upgrade task.
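You can also track the upgrade from the CLI. The following command is a sketch; its output (for example, upgrading or inactive) depends on the current state:

svcinfo lssoftwareupgradestatus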
Attention: We do not detail certain actions because those actions must be run under the direction of IBM Support. Do not try to perform actions of this kind without IBM Support direction.
To be able to use the SVC Service Assistant application with the GUI, you must first have a service IP address configured for each node of your cluster. For more information about how to set the SVC service IP address, see 4.4.3, Configuring the Service IP Addresses on page 131. With a supported web browser, open the following link, and you will reach the Service Assistant login window (Figure 10-413 on page 864):
https://<your service ip address>/service/
Log in with your superuser password, and you will reach the Service Assistant home page (Figure 10-414).
From the Service Assistant home page (Figure 10-414), you can obtain an overview of your SVC cluster and the node status. You can view a detailed status and error summary and manage service actions for the current node. The current node is the node on which service-related actions are performed. The connected node displays the Service Assistant and provides the interface for working with other nodes on the system. To manage a different node, select the radio button to the left of the node panel name, and the details for the selected node are shown. Using the pull-down menu on the Service Assistant home page, you can select which action you want to execute on the selected node (Figure 10-415).
As shown in Figure 10-415, the following actions are possible for the selected node:
Enter Service State
Power off
Restart
Reload
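The Service Assistant also provides a command-line interface (the satask command set) that can perform the same actions. The following lines are a hedged sketch only; <panel_name> stands for the front-panel name of the target node, and the exact options can vary by code level:

satask startservice <panel_name>
satask stopservice <panel_name>
satask stopnode -reboot <panel_name>
satask stopnode -poweroff <panel_name>

These commands correspond to entering service state, exiting service state, restarting, and powering off the node.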
At this point the information window displays (Figure 10-417). Wait until the node is available, then click OK.
Now you will be returned to the Service Assistant home page, where you can see the status of the node that just entered Service State (Figure 10-418 on page 868). Also note event code 690, which means that resources have entered a Service State.
The Service Assistant home page pull-down menu now offers different choices, as shown in Figure 10-419:
Hold in Service State
Power off
Restart
Reload
At this point the information window for your action will display (Figure 10-417 on page 867). Wait until the node is available, then click OK. When the node is available, the window shown in Figure 10-422 displays.
You can see that the node is starting, and the event shown in the Error column is simply a regular message. Click Refresh until you see that your node is active and no event is displayed in the Error column. In our example, we used the Exit from Service State action on the Service Assistant home page, but it is also possible to exit from a Service State by using the Restart action.
On the next confirmation window, wait until the operation completes successfully and then click OK (Figure 10-425 on page 871).
From the Service Assistant Home Page, notice that the node that you just rebooted has disappeared (Figure 10-426). This node will still be visible in an Offline State from the GUI or from the SVC command line interface.
The node that you just rebooted has to complete its restart before it becomes visible again. Normally, a node reboot takes about 14 minutes.
To create a support package with the last statesave, select the related option, click Create, and download. The page shown in Figure 10-428 on page 872 is displayed.
You will be asked where you want to save the support package (Figure 10-429).
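If you prefer to copy the generated files manually, the contents of the node dump directory can be downloaded with pscp, as in the statistics tip in Appendix A. The session name, address, and target directory in this sketch are illustrative:

pscp -unsafe -load ITSO-SVC3 superuser@10.18.229.81:/dumps/* c:\supportfiles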
To reinstall the software, the node must either be a candidate node or in service state. During the reinstallation, the node becomes unavailable. If the connected node and the current node are the same, the connection to the Service Assistant might be lost. Figure 10-432 shows the Re-install software page. On this page, clicking Check for software updates redirects you to the IBM website, where you will find any available update for the SVC software, at the following link:
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1001707#_Latest_SAN_Volume_Controller%20Code
Attention: We do not detail this procedure because the software reinstallation action must be run under the direction of IBM Support. Do not try to perform this action unless guided by IBM Support.
Appendix A. Performance considerations
When designing an SVC storage infrastructure or maintaining an existing one, you need to consider many factors in terms of their potential impact on performance. These factors include, but are not limited to: dissimilar workloads competing for the same resources, overloaded resources, insufficient or poorly performing resources, and similar performance constraints.
There are a few high-level rules that you should always keep in mind when designing your SAN and SVC layout:
Host-to-SVC ISL oversubscription: This is the most significant I/O load across ISLs. The recommendation is to maintain a maximum of 7-to-1 oversubscription. Going higher is possible, but tends to lead to I/O bottlenecks. This also assumes a core-edge design, where the hosts are on the edge and the SVC is on the core.
Storage-to-SVC ISL oversubscription: This is the second most significant I/O load across ISLs. The maximum oversubscription is 7-to-1. Going higher is not supported. Again, this assumes a multiple-switch SAN fabric design.
Node-to-node ISL oversubscription: This is the least significant load of the three possible oversubscription bottlenecks. In standard setups this load can be ignored; while it is not entirely negligible, it does not contribute significantly to ISL load. However, it is mentioned here with regard to the split cluster capability that was made available with 6.3.0. When running in this manner, the number of ISL links becomes much more important. As with storage-to-SVC ISL oversubscription, a maximum of 7-to-1 oversubscription is required. Exercise caution and careful planning when you determine the number of ISLs to implement. If you need additional assistance, contact your IBM representative and request technical assistance.
ISL trunking/PortChanneling: For the best performance and availability, we highly recommend using ISL trunking or PortChanneling. Independent ISL links can easily become overloaded and turn into performance bottlenecks. Bonded or trunked ISLs automatically share load and provide better redundancy in the case of a failure.
Number of paths per host multipath device: The maximum supported number of paths per multipath device visible on the host is eight. SDDPCM, its relatives, and most vendor multipathing software support more, but the SVC expects a maximum of eight. In general, more paths than that cause a performance impact, and although the SVC works with more than eight, it is technically unsupported.
Do not intermix dissimilar array types or sizes: Although the SVC supports an intermix of differing storage within storage pools, it is best to always use the same array model, the same RAID mode, the same RAID size (RAID-5 6+P+S does not mix well with RAID-6 14+2), and the same drive speeds.
Rules and guidelines are no substitute for monitoring performance. Monitoring performance can both validate that design expectations are met and identify opportunities for improvement.
Performance monitoring
This section highlights several performance monitoring techniques.
You can define the sampling interval by using the svctask startstats -interval 2 command to collect statistics at 2-minute intervals; see 9.8.7, Starting statistics collection on page 524.
Note: While more frequent collection intervals provide a more detailed view of what is happening within the SVC, they shorten the amount of time that historical data is available on the SVC. For example, instead of an 80-minute period of data with the default 5-minute interval, adjusting to 2-minute intervals as above gives you a 32-minute period instead.
Since SVC 5.1.0, cluster-level statistics are no longer supported. Instead, use the per-node statistics that are collected. The sampling of the internal performance counters is coordinated across the cluster (by the config node) so that when a sample is taken, all nodes sample their internal counters at the same time. It is important to collect all files from all nodes for a complete analysis. Tools such as TPC perform this rather intensive data collection on your behalf.
Example A-1 shows typical MDisk, volume, node, and disk drive statistics file names.
Example A-1 Filename of per node statistics IBM_2145:ITSO_SVC3:superuser>svcinfo lsiostatsdumps id iostat_filename 0 Nn_stats_104603_111003_094739 1 Nd_stats_104603_111003_094739 2 Nv_stats_104603_111003_094739 3 Nm_stats_104603_111003_094739 4 Nn_stats_104603_111003_100238 5 Nv_stats_104603_111003_100238 6 Nm_stats_104603_111003_100238 7 Nd_stats_104603_111003_100238 8 Nm_stats_104603_111003_101736 9 Nv_stats_104603_111003_101736 10 Nd_stats_104603_111003_101736 11 Nn_stats_104603_111003_101736 12 Nn_stats_104603_111003_103235 13 Nm_stats_104603_111003_103235 14 Nv_stats_104603_111003_103235 15 Nd_stats_104603_111003_103235
16 Nn_stats_104603_111003_104734
Tip: The performance statistics files can be copied from the SVC nodes to a local drive on your workstation using the pscp.exe (included with PuTTY) from an MS-DOS command line, as shown in this example: C:\Program Files\PuTTY>pscp -unsafe -load ITSO-SVC3 admin@10.18.229.81:/dumps/iostats/* c:\statsfiles Use the -load parameter to specify the session that is defined in PuTTY. Specify the -unsafe parameter when you use wildcards. The performance statistics files are in XML format. They can be manipulated using various tools and techniques. An example of a tool that you can use to analyze these files is the SVC Performance Monitor (svcmon). Note: The svcmon tool is not an officially supported tool. It is provided on an as is basis. You can obtain this tool from the following website: http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS3177 Figure A-1 shows an example of the type of chart that you can produce using the SVC performance statistics.
IBM_2145:ITSO_SVC3:superuser>lsnodestats node_id node_name stat_name stat_current 1 ITSOSVC3N1 cpu_pc 1 1 ITSOSVC3N1 fc_mb 0 1 ITSOSVC3N1 fc_io 1724 ... 2 ITSOSVC3N2 cpu_pc 1 2 ITSOSVC3N2 fc_mb 0 2 ITSOSVC3N2 fc_io 1689 ...
The previous example shows statistics for the two member nodes of cluster ITSO_SVC3, nodes ITSOSVC3N1 and ITSOSVC3N2. For each of these nodes, the following columns are displayed:
stat_name: The name of the statistic field
stat_current: The current value of the statistic field
stat_peak: The peak value of the statistic field in the last five minutes
stat_peak_time: The time that the peak occurred
In contrast, the lssystemstats command lists the same set of statistics as lsnodestats, but the values are representative of all nodes in the cluster. The values for these statistics are calculated from the node statistics in the following way:
Bandwidth: Sum of the bandwidth of all nodes
Latency: Average latency for the cluster; this value is calculated by using data from the whole cluster, not an average of the single node values
IOPS: Total IOPS of all nodes
CPU percentage: Average CPU percentage of all nodes
Example A-3 shows the resulting output of the lssystemstats command.
Example A-3 lssystemstats command output
IBM_2145:ITSO_SVC3:superuser>lssystemstats stat_name stat_current stat_peak stat_peak_time cpu_pc 1 1 111003160859 fc_mb 0 0 111003160859 fc_io 1291 1420 111003160504 ...
Table A-1 has a brief description of each of the statistics presented by the lssystemstats and lsnodestats commands.
Table A-1  lssystemstats and lsnodestats statistics field name descriptions

Field name        Unit          Description
cpu_pc            Percentage    Utilization of node CPUs
fc_mb             MB/s          Fibre Channel bandwidth
fc_io             IO/s          Fibre Channel throughput
sas_mb            MB/s          SAS bandwidth
sas_io            IO/s          SAS throughput
iscsi_mb          MB/s          iSCSI bandwidth
iscsi_io          IO/s          iSCSI throughput
write_cache_pc    Percentage    Write cache fullness; updated every ten seconds
total_cache_pc    Percentage    Total cache fullness; updated every ten seconds
vdisk_mb          MB/s          Total VDisk bandwidth
vdisk_io          IO/s          Total VDisk throughput
vdisk_ms          Milliseconds  Average VDisk latency
mdisk_mb          MB/s          MDisk (SAN and RAID) bandwidth
mdisk_io          IO/s          MDisk (SAN and RAID) throughput
mdisk_ms          Milliseconds  Average MDisk latency
drive_mb          MB/s          Drive bandwidth
drive_io          IO/s          Drive throughput
drive_ms          Milliseconds  Average drive latency
vdisk_w_mb        MB/s          VDisk write bandwidth
vdisk_w_io        IO/s          VDisk write throughput
vdisk_w_ms        Milliseconds  Average VDisk write latency
mdisk_w_mb        MB/s          MDisk (SAN and RAID) write bandwidth
mdisk_w_io        IO/s          MDisk (SAN and RAID) write throughput
mdisk_w_ms        Milliseconds  Average MDisk write latency
drive_w_mb        MB/s          Drive write bandwidth
drive_w_io        IO/s          Drive write throughput
drive_w_ms        Milliseconds  Average drive write latency
vdisk_r_mb        MB/s          VDisk read bandwidth
vdisk_r_io        IO/s          VDisk read throughput
vdisk_r_ms        Milliseconds  Average VDisk read latency
mdisk_r_mb        MB/s          MDisk (SAN and RAID) read bandwidth
mdisk_r_io        IO/s          MDisk (SAN and RAID) read throughput
mdisk_r_ms        Milliseconds  Average MDisk read latency
drive_r_mb        MB/s          Drive read bandwidth
drive_r_io        IO/s          Drive read throughput
drive_r_ms        Milliseconds  Average drive read latency
The performance monitoring window, shown in Figure A-3 on page 888, is divided into four sections that provide utilization views for the following resources:
CPU utilization: Shows the overall CPU usage percentage.
Volumes: Shows the overall volume utilization with the following fields: Read, Write, Read latency, and Write latency.
Interfaces: Shows overall statistics for each of the available interfaces: Fibre Channel, iSCSI, and SAS.
MDisks: Shows the following overall statistics for the MDisks: Read, Write, Read latency, and Write latency.
You can also select to view performance statistics for each of the available nodes of the system as shown in Figure A-4.
It is also possible to change the metric between MBps and I/Os per second (Figure A-5).
On any of these views, you can select with your cursor any point in time to see the exact value and when it occurred. As soon as you place your cursor over the timeline, it becomes a dotted line that shows the different values gathered. This is shown in Figure A-6.
For each of the resources, there are different values that you can view by selecting the check box next to each value. For example, in the MDisks view that we show in Figure A-7, all four available fields are selected: Read, Write, Read latency, and Write latency.
Performance data collection and Tivoli Storage Productivity Center for Disk
Although you can obtain performance statistics in standard .xml files, that is not the most practical or user-friendly way to analyze SVC performance statistics. The Tivoli Storage Productivity Center (TPC) for Disk is the official and supported IBM tool used to collect and analyze SVC performance statistics. Tivoli Storage Productivity Center for Disk comes preinstalled on the System Storage Productivity Center Console and can be made available by activating the license. For more information about using Tivoli Storage Productivity Center to monitor your storage subsystem, see Monitoring Your Storage Subsystems with TotalStorage Productivity Center, SG24-7364, which is available at the following website: http://www.redbooks.ibm.com/abstracts/sg247364.html?Open Note: Tivoli Storage Productivity Center for Disk for TPC Version 4.2.1 supports new SVC port quality statistics provided in SVC Versions 4.3 and above. Monitoring these metrics in addition to the performance metrics can help you to maintain a stable SAN environment.
Appendix B. Terminology
In this appendix we define terms commonly used within this book that relate to the SVC and its concepts. To see the complete set of terms that are related to the SVC, refer to the Glossary section of the SVC Information Center. It is available at the following website: http://publib.boulder.ibm.com/infocenter/svc/ic/index.jsp
back-end
See front-end and back-end.
channel extender
A channel extender is a device used for long distance communication connecting other SAN fabric components. Generally, channel extenders can involve protocol conversion to asynchronous transfer mode (ATM), Internet Protocol (IP), or another long distance communication protocol.
cold extent
A cold extent is a volume's extent that does not get any performance benefit if moved from HDD to SSD. A cold extent also refers to an extent that needs to be migrated onto HDD if it currently resides on SSD.
Consistency Group
A Consistency Group is a group of copy relationships between virtual volumes or data sets that are maintained with the same time reference so that all copies are consistent in time. A Consistency Group can be managed as a single entity.
copied
Copied is a FlashCopy state that indicates that a copy has been triggered after the copy relationship was created. The copied state indicates that the copy process is complete, and the target disk has no further dependence on the source disk. The time of the last trigger event is normally displayed with this status.
configuration node
While the cluster is operational, a single node in the cluster is appointed to provide configuration and service functions over the network interface. This node is termed the configuration node. This configuration node manages the information that describes the cluster configuration, and it provides a focal point for configuration commands. If the configuration node fails, another node in the cluster will transparently assume that role.
counterpart SAN
A counterpart SAN is a non-redundant portion of a redundant SAN. A counterpart SAN provides all of the connectivity of the redundant SAN, but without the 100% redundancy. SVC nodes are typically connected to a redundant SAN made up of two counterpart SANs. A counterpart SAN is often called a SAN fabric.
disk tier
It is likely that the MDisks (LUNs) presented to the SVC cluster will have different performance attributes due to the type of disk or RAID array on which they reside. The MDisks can be on 15 K RPM Fibre Channel or SAS disks, Nearline SAS or SATA disks, or even solid-state disks (SSDs). Thus, a storage tier attribute is assigned to each MDisk, with the default being generic_hdd. SVC 6.1 introduced a new disk tier attribute for SSDs, known as generic_ssd.
Easy Tier
Easy Tier is a volume performance function within the SVC that provides automatic data placement of a volume's extents in a multitiered storage pool. The pool normally contains a mix of SSDs and HDDs. Easy Tier measures host I/O activity on the volume's extents and migrates hot extents onto the SSDs to ensure maximum performance.
evaluation mode
The evaluation mode is an Easy Tier operating mode in which the host activity on all the volume extents in a pool is measured only. No automatic extent migration is performed.
event (error)
An event is an occurrence of significance to a task or system. Events can include completion or failure of an operation, a user action, or the change in state of a process. Prior to SVC V6.1, this was known as an error.
event code
An event code is a value used to identify an event condition to a user. This value might map to one or more event IDs or to values that are presented on the service panel. This value is used to report error conditions to IBM and to provide an entry point into the service guide.
event ID
An event ID is a value that is used to identify a unique error condition detected by the SVC. An event ID is used internally in the cluster to identify the error.
excluded
The excluded condition is a status condition that describes an MDisk that the SVC has determined to be no longer sufficiently reliable to be managed by the cluster. The user must issue a command to include the MDisk in the cluster-managed storage.
extent
An extent is a fixed-size unit of data that is used to manage the mapping of data between MDisks and volumes. The size of an extent can range from 16 MB to 8 GB.
FC port logins
FC port logins refers to the number of hosts that can see any one SVC node port. The SVC has a maximum limit on the number of Fibre Channel logins that are allowed per node port.
grain
A grain is the unit of data that is represented by a single bit in a FlashCopy bitmap (64 KB/256 KB) in the SVC. It is also the unit to extend the real size of a thin provisioned volume (32, 64, 128, or 256 KB).
host ID
A host ID is a numeric identifier assigned to a group of host FC ports or iSCSI host names for the purposes of LUN mapping. For each host ID, there is a separate mapping of SCSI IDs to volumes. The intent is to have a one-to-one relationship between hosts and host IDs, although this relationship cannot be policed.
host mapping
Host mapping refers to the process of controlling which hosts have access to specific volumes within a cluster (it is equivalent to LUN masking). Prior to SVC V6.1, this was known as VDisk-to-Host mapping.
hot extent
A hot extent is a frequently accessed volume extent that gets a performance benefit if moved from HDD onto SSD.
internal storage
Internal storage refers to an array of managed disks (MDisks) and drives that are held in enclosures and in nodes that are part of the SVC cluster.
image mode
The image mode is an access mode that establishes a one-to-one mapping of extents in an existing LUN or (image mode) MDisk with the extents in a volume.
I/O group
Each pair of SVC cluster nodes is known as an input/output (I/O) group. An I/O group has a set of volumes associated with it that are presented to host systems. Each SVC node is associated with exactly one I/O group. The nodes in an I/O group provide a failover, failback function for each other.
ISL hop
An inter-switch link (ISL) is a connection between two switches and is counted as one ISL hop. The number of hops is always counted on the shortest route between two N-ports (device connections). In an SVC environment, the number of ISL hops is counted on the shortest route between the pair of nodes that are farthest apart. SVC guidelines specify a maximum number of hops for certain fabric paths.
local fabric
The local fabric is composed of SAN components (switches, cables, and so on) that connect the components (nodes, hosts, and switches) of the local cluster together.
LU and LUN
LUN is formally defined by the SCSI standards as a logical unit number. It is used as an abbreviation for an entity, which exhibits disk-like behavior, for example, a volume or an MDisk.
mirrored volume
A mirrored volume is a single virtual volume that has two physical volume copies. The primary physical copy is known within the SVC as copy 0, and the secondary copy is known within the SVC as copy 1.
node
An SVC node is a hardware entity that provides virtualization, cache, and copy services for the cluster. SVC nodes are deployed in pairs called I/O groups. One node in a clustered system is designated the configuration node.
oversubscription
The term oversubscription refers to the ratio of the sum of the traffic on the initiator N-port connections to the traffic on the most heavily loaded ISLs, where more than one connection is used between these switches. Oversubscription assumes a symmetrical network, and a specific workload applied equally from all initiators and sent equally to all targets. A symmetrical network means that all the initiators are connected at the same level, and all the controllers are connected at the same level.
preparing phase
Before you start the FlashCopy process, you must prepare a FlashCopy mapping. The preparing phase is used to flush a volume's data from cache in preparation for the FlashCopy operation.
RAS
RAS stands for reliability, availability, and serviceability.
RAID
RAID stands for redundant array of independent disks: two or more physical disk drives combined in an array in a certain way, incorporating a RAID level for failure protection, better performance, or both. The most common RAID levels are 0, 1, 5, 6, and 10.
RAID 0
RAID 0 is a data striping technique used across an array; no data protection is provided.
RAID 1
RAID 1 is a mirroring technique used on a storage array in which two or more identical copies of data are maintained on separate mirrored disks.
RAID 10
RAID 10 is a combination of a RAID 0 stripe that is mirrored (RAID 1). Thus, two identical copies of striped data exist; there is no parity.
RAID 5
RAID 5 is an array that has a data stripe which includes a single logical parity drive. The parity check data is distributed across all the array's disks.
RAID 6
RAID 6 is a RAID level that has two logical parity drives per stripe, which are calculated with different algorithms. Therefore, the array can continue to process read and write requests to all of its virtual disks in the presence of two concurrent disk failures.
redundant SAN
A redundant SAN is a SAN configuration in which there is no single point of failure (SPoF), so no matter what component fails, data traffic will continue. Connectivity between the devices within the SAN is maintained, although possibly with degraded performance, when an error has occurred. A redundant SAN design is normally achieved by splitting the SAN into two independent counterpart SANs (two SAN fabrics), so that if one path of the counterpart SAN is destroyed, the other counterpart SAN path keeps functioning.
remote fabric
The remote fabric is composed of SAN components (switches, cables, and so on) that connect the components (nodes, hosts, and switches) of the remote cluster together. There can be significant distances between the components in the local cluster and those components in the remote cluster.
SAN
SAN stands for storage area network.
SCSI
SCSI stands for Small Computer Systems Interface.
volume
A volume is an SVC logical device that appears to host systems attached to the SAN as a SCSI disk. Each volume is associated with exactly one I/O Group. It will have a preferred node within the I/O group. Prior to SVC 6.1, this was known as a VDisk or virtual disk.
volume copy
A volume copy is a physical copy of the data that is stored on a volume. Mirrored volumes have two such copies. Non-mirrored volumes have one copy.
Appendix C. SAN Volume Controller Split I/O Group Overview, Diagnostics, and Recovery Guidelines
Introduction
In this appendix, we discuss the available Split I/O Group options, configuration, restrictions, and limitations. We also focus on the diagnostics procedures so that you can understand what may be happening in your Split I/O Group environment after a critical event, and so that you are in the best position to make the correct decisions. This could mean:
Waiting until the failure in one of the two sites is fixed, or
Declaring a disaster and starting the recovery action that is described later in this topic
No ISL Configuration
In the No ISL configuration, each SVC I/O Group consists of two independent SVC nodes. In contrast to a standard SVC environment, nodes from the same I/O Group are not placed close together; they are distributed across two sites. If a node fails, the other node in the same I/O Group takes over the workload, which is standard behavior in an SVC environment. Volume Mirroring provides a consistent data copy at both sites. If one storage subsystem fails, the remaining subsystem processes the I/O requests. The combination of SVC node distribution across two independent data centers and a copy of the data in two independent data centers creates a new level of availability. All SVC nodes and the storage system in a single site may fail; the other SVC nodes will take over the server load by using the remaining storage subsystems. The volume ID, the volume behavior, and the volume assignment to the server remain the same: no server reboot and no failover scripts, and thus no script maintenance, are required. However, consider that a Split I/O Group typically requires a special setup and might exhibit substantially reduced performance. In a Split I/O Group environment, the SVC nodes from the same I/O Group reside in two different sites. A third quorum location is required for handling split-brain scenarios. Figure C-1 on page 902 shows an example of a No ISL Split I/O Group configuration as it is currently supported in SVC V5.1.
The Split I/O Group uses the SVC Volume Mirroring function. Volume Mirroring allows the creation of one volume with two copies of MDisk extents; there are not two volumes with the same data on them. The two data copies can be in different MDisk Groups. Thus, Volume Mirroring can minimize the impact to volume availability if one set of MDisks goes offline. The resynchronization between both copies after recovering from a failure is incremental; SVC starts the resynchronization process automatically. Like a standard volume, each mirrored volume is owned by one I/O Group with a preferred node. Thus, the mirrored volume goes offline if the whole I/O Group goes offline. The preferred node performs all I/O operations, which means reads as well as writes. The preferred node can be set manually. The quorum disk keeps the status of the mirrored volume. The last status (in sync or not in sync) and the definition of the primary and secondary volume copies are saved there. Thus, an active quorum disk is required for Volume Mirroring. To ensure data consistency, SVC disables all mirrored volumes if there is no longer access to any quorum disk candidate. Therefore, quorum disk placement is a very important point in a Volume Mirroring and Split I/O Group configuration.
Best practices:
Drive read I/O to the local storage system. For distances less than 10 km, drive the read I/O to the faster of the two disk subsystems if they are not identical. Take long-distance links into account.
The preferred node should stay at the same site as the server accessing the volume.
The Volume Mirroring primary copy should stay at the same site as the server accessing the volume, in order to avoid any potential latency impact where a longer-distance solution is implemented.
In many cases, no independent third site is available. It is possible to use an already existing building or computer room from the two main sites to create a third, independent failure domain. There are several things to consider:
The third failure domain needs an independent power supply (or UPS). If the hosting site fails, the third failure domain should continue to operate.
A separate storage controller for the active SVC quorum disk is required; otherwise, the SVC loses multiple quorum disk candidates at the same time if a single storage subsystem fails.
Each site (failure domain) should be placed in a different location in case of fire.
Fibre Channel cabling should not go through another site (failure domain). Otherwise, a fire in one failure domain could destroy the links (and break access) to the SVC quorum disk.
As shown in Figure C-1 on page 902, the setup is similar to a standard SVC environment, but the nodes are distributed across two sites. SVC nodes and data are equally distributed across two separate sites with independent power sources, named as separate failure domains (Failure Domain 1 and 2). The quorum disk is located at a third site with a separate power source (Failure Domain 3). For each I/O Group, four dedicated fiber-optic links between site 1 and site 2 are required. Where the No ISL configuration is implemented over a distance of more than 10 km, passive WDM devices (without power) can be used to pool multiple fiber-optic links with different wavelengths into one or two connections between both sites. SFPs with different wavelengths (colored SFPs, that is, SFPs used in CWDM devices) are required here. The maximum distance between both major sites is limited to 40 km. Because we have to prevent the risk of burst traffic (due to a lack of buffer-to-buffer credits), the link speed must be limited, depending on the cable length between nodes in the same I/O Group, as shown in Table C-1.
Table C-1  SVC code level, maximum length, and maximum link speed

SVC code level   Maximum length   Maximum link speed
>= SVC 5.1       10 km            8 Gbit/s
>= SVC 6.3       20 km            4 Gbit/s
>= SVC 6.3       40 km            2 Gbit/s
The quorum disk at the third site must be Fibre Channel attached. FCIP can be used if the round-trip delay time to the third site is always less than 80 ms, that is, 40 ms in each direction. Figure C-2 on page 904 shows a detailed diagram where passive WDM devices are used to extend the links between site 1 and site 2.
For the best performance, a server in site 1 should access the volumes in site 1 (preferred node and primary copy in site 1). SVC Volume Mirroring copies the data to Storage 1 and Storage 2. A similar setup should be implemented for servers in site 2, with access to the SVC node in site 2. With the configuration shown in Figure C-2 on page 904, several failover cases are covered:
Power off FC Switch 1: FC Switch 2 takes over the load and routes I/O to SVC Node 1 and SVC Node 2.
Power off SVC Node 1: SVC Node 2 takes over the load and serves the volumes to the server. SVC Node 2 changes the cache mode to write-through to avoid data loss in case SVC Node 2 fails as well.
Power off Storage 1: SVC waits a short time (15 to 30 seconds), pauses the volume copies on Storage 1, and continues I/O operations using the remaining volume copies on Storage 2.
Power off Site 1: The server no longer has access to the local switches, causing access loss. Optional: avoid this access loss by using additional fiber-optic links between site 1 and site 2 for server access.
Of course, the same scenarios are valid for site 2, and similar scenarios apply in a mixed failure environment (for example, failure of Switch 1, SVC Node 2, and Storage 2). No manual failover/failback activities are required, because SVC performs the failover/failback operations. The use of AIX Live Partition Mobility or VMware VMotion can increase the number of use cases significantly. Online system migrations, including running virtual machines and applications, are possible, which is a perfectly acceptable way to handle maintenance operations.
Advantages:
Business continuity solution distributed across two independent data centers
Configuration similar to a standard SVC clustered system
Limited hardware effort: passive WDM devices can be used, but are not required
Requirements:
Four independent fiber-optic links for each I/O Group between both data centers are required
LW SFPs with support over long distance are required for direct connection to the remote storage area network (SAN)
Optional usage of passive WDM devices:
Passive WDM device: no power required for operation
Colored SFPs required to make different wavelengths available
Colored SFPs must be supported by the switch vendor
Two independent fiber-optic links between site 1 and site 2 are recommended
A third site for quorum disk placement is required
The quorum disk storage system must use Fibre Channel for attachment, with requirements similar to Metro Mirror storage (80 ms round-trip delay time, 40 ms in each direction)
Bandwidth reduction:
Buffer credits, also called buffer-to-buffer (BB) credits, are used as a flow control method by Fibre Channel technology and represent the number of frames that a port can store. Thus, buffer-to-buffer credits are necessary to have multiple Fibre Channel frames in flight in parallel. An appropriate number of buffer-to-buffer credits is required for optimal performance. The number of buffer credits needed to achieve maximum performance over a given distance depends on the speed of the link:
1 buffer credit = 2 km at 1 Gbit/s
1 buffer credit = 1 km at 2 Gbit/s
1 buffer credit = 0.5 km at 4 Gbit/s
1 buffer credit = 0.25 km at 8 Gbit/s
For example, a 10 km link running at 8 Gbit/s needs 10 / 0.25 = 40 buffer credits to keep the link fully utilized. The guidelines above give the minimum numbers. Performance drops if there are not enough buffer credits for the link distance and link speed, as shown in Figure C-3.
The number of buffer-to-buffer credits provided by an SVC Fibre Channel host bus adapter (HBA) is limited. An HBA of a 2145-CF8 node provides 41 buffer credits, which is sufficient for a 10 km distance at 8 Gbit/s. The SVC adapters in all earlier models provide only 8 buffer credits, which is enough only for a 4 km distance at 4 Gbit/s link speed. These numbers are determined by the HBA hardware and cannot be changed. We suggest using 2145-CF8 or CG8 nodes for distances longer than 4 km in order to provide enough buffer-to-buffer credits at a reasonable FC speed.
ISL Configuration
Where a distance beyond 40 km between Site 1 and Site 2 is required, a different configuration must be applied. The setup is quite similar to a standard SVC environment, but the nodes are allowed to communicate over long distance by using ISL links between both sites, with active or passive WDM and a different SAN configuration. Figure C-4 shows the detailed diagram of a configuration with active/passive WDM.
The Split I/O Group configuration shown in Figure C-4 supports distances of up to 300 km (the same recommendation as for Metro Mirror).
Technically, SVC tolerates a round-trip delay of up to 80 ms between nodes. Cache mirroring traffic, rather than Metro Mirror traffic, is sent across the inter-site link, and data is mirrored to back-end storage by using Volume Mirroring. Data is written by the preferred node to both the local and remote storage; the SCSI Write protocol results in two round trips. This latency is hidden from the application by the write cache. A Split I/O Group is often used to move workload between servers at different sites. VMotion or an equivalent can be used to move applications between servers; hence, applications no longer necessarily issue I/O requests to the local SVC nodes. SCSI Write commands from hosts to remote SVC nodes result in an additional two round trips worth of latency that is visible to the application. For Split I/O Group configurations in a long-distance environment, it is advisable to use the local site for host I/O. Some switches and distance extenders use extra buffers and proprietary protocols to eliminate one of the round trips worth of latency for SCSI Write commands. These devices are already supported for use with SVC; basically, they give no benefit or impact to inter-node communication, but they do benefit the host to remote SVC I/Os and the SVC to remote storage controller I/Os.
Requirements
A Split I/O Group with an ISL configuration must meet the following requirements:
Four independent, extended SAN fabrics, as shown in Figure C-4 on page 906. Those fabrics are named Public SAN1, Public SAN2, Private SAN1, and Private SAN2. Each public or private SAN can be implemented with a dedicated FC switch or director, or can be just a virtual SAN in a Cisco or Brocade FC switch or director.
Two ports per SVC node attached to the private SANs
Two ports per SVC node attached to the public SANs
SVC Volume Mirroring between Site 1 and Site 2
Hosts and storage attached to the public SANs
Third-site quorum attached to the public SANs
Figure C-5 shows the possible configurations with a virtual SAN.
Figure C-6 on page 908 shows the possible configurations with a physical SAN.
Use a third site to house a quorum disk. Connections to the third site can be via FCIP because of the distance (for simplicity, no FCIP or FC switches are shown in the pictures above). In many cases, no independent third site is available. It is possible to use an already existing building from the two main sites to create a third, independent failure domain, but you have to consider several things:
The third failure domain needs an independent power supply (or UPS). If the hosting site fails, the third failure domain should continue to operate.
Each site (failure domain) should be placed in a different fire compartment.
Fibre Channel cabling should not go through another site (failure domain). Otherwise, a fire in one failure domain would destroy the links (and break access) to the SVC quorum disk.
Applying these considerations, the SVC clustered system is well protected even though two failure domains are in the same building. Consider an IBM Advanced Technical Support (ATS) review, or process an RPQ/SCORE to review the proposed configuration.
The storage system that provides the quorum disk at the third site must support extended quorum disks. Storage systems that provide extended quorum support are listed at the following website:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003907
Four active/passive WDM devices, two per site, are required to extend the Public and Private SANs over distance.
Place independent storage systems at the primary and secondary sites, and use Volume Mirroring to mirror the host data between the storage systems at the two sites.
SAN Volume Controller nodes that are in the same I/O Group must be located at the two different remote sites.
Example: C-1 Saving SVC configuration IBM_2145:Split_Cluster_1:admin>svcconfig backup ................................................................................. CMMVC6155I SVCCONFIG processing completed successfully IBM_2145:Split_Cluster_1:admin>lsdumps id filename 0 159680.trc.old . . 24 SVC.config.backup.xml_159072
b. Save the produced .xml file in a safe place as shown in Example C-2.
Example: C-2 Copying configuration C:\Program Files\PuTTY>pscp -load SVC_Mainz admin@10.18.229.84:/tmp/SVC.config.backup.xml_159072 c:\temp\clibackup.xml clibackup.xml | 97 kB | 97.2 kB/s | ETA: 00:00:00 | 100%
c. Save the output of the SVC CLI commands in .txt format as shown in Example C-3.
Example: C-3 List of svc commands to be issued and saved lssystem lsnode lsnode node <nodes name> lsnodevpd <nodes name> lsiogrp lsiogrp <iogrps name> lscontroller lscontroller <controllers name> lsmdiskgrp lsmdiskgrp <mdiskgrps name> lsmdisk lsquorum lsquorum <quorum id> lsvdisk lshost lshost <host name> lshostvdiskmap
From the output of the above commands and the .xml file, we have a complete picture of the SVC Split I/O Group infrastructure, and we know the WWNNs of the SVC FC ports so that we can reuse them during the recovery operation described in the next topics of this chapter. Example C-4 shows the information from the .xml file that we need in order to recreate a Split I/O Group environment after a critical event.
Example: C-4 xml configuration file <object type="node" > <property name="id" value="1" /> <property name="name" value="node_159072" /> <property name="UPS_serial_number" value="100014P293" /> <property name="WWNN" value="500507680100C109" /> <property name="status" value="online" /> <property name="IO_Group_id" value="0" /> <property name="IO_Group_name" value="io_grp0" /> <property name="partner_node_id" value="2" /> <property name="partner_node_name" value="node_159680" />
<property name="config_node" value="yes" />
<property name="UPS_unique_id" value="2040000044802243" />
<property name="port_id" value="500507680140C109" />
<property name="port_status" value="active" />
<property name="port_speed" value="8Gb" />
<property name="port_id" value="500507680130C109" />
<property name="port_status" value="active" />
<property name="port_speed" value="8Gb" />
<property name="port_id" value="500507680110C109" />
<property name="port_status" value="active" />
<property name="port_speed" value="8Gb" />
<property name="port_id" value="500507680120C109" />
<property name="port_status" value="active" />
<property name="port_speed" value="8Gb" />
<property name="hardware" value="CG8" />
<property name="iscsi_name" value="iqn.1986-03.com.ibm:2145.splitcluster1.node159072" />
<property name="iscsi_alias" value="" /> <property name="failover_active" value="no" /> <property name="failover_name" value="node_159680" /> <property name="failover_iscsi_name" value="iqn.1986-03.com.ibm:2145.splitcluster1.node159680" /> <property name="failover_iscsi_alias" value="" /> <property name="panel_name" value="159072" /> <property name="enclosure_id" value="" /> <property name="canister_id" value="" /> <property name="enclosure_serial_number" value="" /> <property name="service_IP_address" value="9.155.114.14" /> <property name="service_gateway" value="9.155.112.1" /> <property name="service_subnet_mask" value="255.255.240.0" /> <property name="service_IP_address_6" value="" /> <property name="service_gateway_6" value="" /> <property name="service_prefix_6" value="" />
port_status active
port_speed 8Gb
hardware CG8
iscsi_name iqn.1986-03.com.ibm:2145.splitcluster1.node159072
iscsi_alias
failover_active no
failover_name node_159680
failover_iscsi_name iqn.1986-03.com.ibm:2145.splitcluster1.node159680
failover_iscsi_alias
panel_name 159072
enclosure_id
canister_id
enclosure_serial_number
service_IP_address 9.155.114.14
service_gateway 9.155.112.1
service_subnet_mask 255.255.240.0
service_IP_address_6
service_gateway_6
service_prefix_6
Note: For further detailed information about how to back up your configuration, consult:
https://www-304.ibm.com/support/docview.wss?uid=ssg1S1002175
http://publib.boulder.ibm.com/infocenter/svc/ic/index.jsp
http://www.redbooks.ibm.com/abstracts/sg247933.html?Open
2. It is strongly recommended to keep an up-to-date copy of the diagram of your environment in which all connections are described, at a high level or in detail.
3. It is recommended to have a standard labeling schema and naming convention for your FC or Ethernet cabling, and to have it fully documented.
4. Back up your SAN zoning. The zoning backup can be done by using your FC switch or director command-line interface or GUI. The essential zoning configuration data (domain ID, zoning, aliases, configuration, or zone set) can be saved in a .txt file by using the output from the Cisco or Brocade CLI commands, or by using the appropriate vendor utility to back up the entire configuration. Example C-6 shows what we can save in a .txt file by using Brocade CLI commands.
Example: C-6 Zoning example
IBM-2498-b40-10:FID128:admin> switchshow
switchName:     IBM-2498-b40-10
switchType:     66.1
switchState:    Online
switchMode:     Native
switchRole:     Subordinate
switchDomain:   10
switchId:       fffc0a
switchWwn:      10:00:00:05:33:39:7d:78
zoning:         ON (SVC_WDM_test)
switchBeacon:   OFF
FC Router:      OFF
Allow XISL Use: OFF
LS Attributes:  [FID: 128, Base Switch: No, Default Switch: Yes, Address Mode 0]
Index Port Address Media Speed State    Proto
==============================================
  0   0   0a0000   id    N4    Online   FC  F-Port  50:05:07:63:03:30:45:c7
  1   1   0a0100   id    N4    Online   FC  F-Port  50:05:07:63:03:38:45:c7
  8   8   0a0800   id    N8    Online   FC  F-Port  50:05:07:68:01:10:c4:3f
  9   9   0a0900   id    N8    No_Light FC
. lines omitted for brevity .
 23  23   0a1700   id    N2    Online   FC  LE E-Port  10:00:00:05:1e:34:4b:66 "IBM_2005_H16_4" (upstream)(Trunk master)
. lines omitted for brevity .
 36  36   0a2400   id    N8    No_Light FC

IBM-2498-b40-10:FID128:admin> fabricshow
Switch ID   Worldwide Name           Enet IP Addr  FC IP Addr  Name
-------------------------------------------------------------------------
 2: fffc02  10:00:00:05:1e:34:4b:66  9.155.66.212  0.0.0.0     >"IBM_2005_H16_4"
10: fffc0a  10:00:00:05:33:39:7d:78  9.155.114.11  0.0.0.0     "IBM-2498-b40-10"
32: fffc20  10:00:00:05:33:39:36:49  9.155.114.12  0.0.0.0     "IBM_2498-b40-11"

IBM-2498-b40-10:FID128:admin> cfgshow
Defined configuration:
 cfg:  SVC_WDM_test
       ESX_3650_03_DS8K; ESX_3650_03_SVC; MS_3650_05_DS8K; MS_3650_05_SVC;
       SLES_3650_10; SLES_3650_11; SVC_DR_CL_1; SVC_DR_CL_1_DS8K_S3;
       SVC_Split_CL_1; SVC_Split_CL_1_DS34_03_CTL_A; SVC_Split_CL_1_DS34_03_CTL_B;
       SVC_Split_CL_1_DS34_09_CTL_A; SVC_Split_CL_1_DS34_09_CTL_B;
       SVC_Split_CL_1_DS47_Q_CTL_A; SVC_Split_CL_1_DS47_Q_CTL_B;
       SVC_Split_CL_1_DS50_CTL_A; SVC_Split_CL_1_DS50_CTL_B;
       SVC_Split_CL_1_DS8K_S1; SVC_Split_CL_1_DS8K_S2
. lines omitted for brevity .
 zone: SVC_Split_CL_1
       SVC_85_P2; SVC_85_P3; SVC_87_P2; SVC_87_P3
 zone: SVC_Split_CL_1_DS34_03_CTL_A
       DS3400_03_CTL_A_A; SVC_85_P2; SVC_85_P3; SVC_87_P2; SVC_87_P3
 zone: SVC_Split_CL_1_DS34_03_CTL_B
       DS3400_03_CTL_B_B; SVC_85_P2; SVC_85_P3; SVC_87_P2; SVC_87_P3
 zone: SVC_Split_CL_1_DS34_09_CTL_A
       DS3400_09_CTL_A_2; SVC_85_P2; SVC_85_P3; SVC_87_P2; SVC_87_P3
 zone: SVC_Split_CL_1_DS34_09_CTL_B
       DS3400_09_CTL_B_2; SVC_85_P2; SVC_85_P3; SVC_87_P2; SVC_87_P3
. lines omitted for brevity .
Effective configuration:
 cfg:  SVC_WDM_test
. lines omitted for brevity .
 zone: SVC_Split_CL_1_DS50_CTL_B
       20:15:00:80:e5:18:29:d0
       20:25:00:80:e5:18:29:d0
       50:05:07:68:01:20:c1:09
       50:05:07:68:01:30:c1:09
       50:05:07:68:01:20:c4:78
       50:05:07:68:01:30:c4:78
 zone: SVC_Split_CL_1_DS8K_S1
       50:05:07:63:03:23:45:c7
       50:05:07:63:03:3b:45:c7
       50:05:07:68:01:20:c1:09
       50:05:07:68:01:30:c1:09
       50:05:07:68:01:20:c4:78
       50:05:07:68:01:30:c4:78
 zone: SVC_Split_CL_1_DS8K_S2
       50:05:07:63:03:30:45:c7
       50:05:07:63:03:38:45:c7
       50:05:07:68:01:20:c1:09
       50:05:07:68:01:30:c1:09
       50:05:07:68:01:20:c4:78
       50:05:07:68:01:30:c4:78
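Collecting this output lends itself to scripting. A hypothetical sketch run from a management workstation, assuming the switch accepts command execution over SSH (the address and credentials are examples from this setup):

# Capture the zoning-relevant Brocade CLI output into a .txt file.
for cmd in switchshow fabricshow cfgshow
do
  echo "### $cmd" >> IBM-2498-b40-10_zoning.txt
  ssh admin@9.155.114.11 "$cmd" >> IBM-2498-b40-10_zoning.txt
done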
As a best practice, we suggest using WWNN zoning during the implementation and, during the recovery phase after a critical event, reusing for as long as possible the same domain ID and the same port numbers that were used in the failing site. Zoning is propagated to each switch/director because of the SAN extension with ISLs. More detail on this is provided later. For further detailed information about how to back up your FC switch or director zoning configuration, consult the Brocade Fabric OS Administrator's Guide or the Cisco MDS 9000 Family Command Reference for the appropriate firmware level.
5. Back up your back-end storage subsystem configuration. In your Split I/O Group implementation you might use different storage subsystems, provided by vendors other than IBM. Those storage subsystems should be configured following the SVC best practices so that they can be used for Volume Mirroring. We suggest that you back up your storage subsystem configurations so that, after a critical event, you are in a position to re-create the same environment when you are called to re-establish your Split I/O Group infrastructure in a different site with new storage subsystems. More details are provided in later topics.
a. For DS3XXX, DS4XXX, or DS5XXX storage subsystems, save in a safe place a copy of an up-to-date subsystem profile, as shown in Figure C-7 on page 915.
b. For the DS8000 storage subsystem, we suggest saving in .txt format the output of the DS8000 DSCLI commands shown in Example C-7.
Example: C-7 DS8000 commands
lsarraysite -l
lsarray -l
lsrank -l
lsextpool -l
lsfbvol -l
lshostconnect -l
lsvolgrp -l
showvolgrp -lunmap <SVC vg_name>
c. For the XIV storage subsystem, we suggest saving in .txt format the output of the XCLI commands shown in Example C-8; consult your XIV specialist for further suggestions.
Example: C-8 XIV commands
host_list
host_list_ports
mapping_list
vol_mapping_list
pool_list
vol_list
d. For any other supported storage vendor, refer to the vendor documentation to save a configuration report in which the SVC MDisk configuration and mapping are easy to find. A scripted sketch for collecting the DS8000 and XIV outputs of items b and c follows.
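The outputs requested in items b and c can be collected with a small host-side script. A hypothetical sketch; the profile name, address, and credentials are placeholders, and the exact dscli and xcli invocation should be verified for your code levels:

#!/bin/sh
# Hypothetical collection script for the outputs of Examples C-7 and C-8.
DATE=$(date +%Y%m%d)
# DS8000: run the DSCLI commands and capture the output
for cmd in "lsarraysite -l" "lsarray -l" "lsrank -l" "lsextpool -l" \
           "lsfbvol -l" "lshostconnect -l" "lsvolgrp -l"
do
  dscli -cfg ds8000.profile $cmd >> ds8000_config_$DATE.txt
done
# XIV: run the XCLI commands and capture the output
for cmd in host_list host_list_ports mapping_list vol_mapping_list \
           pool_list vol_list
do
  xcli -m <XIV_ip> -u admin -p <password> $cmd >> xiv_config_$DATE.txt
done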
Diagnosis guidelines
In this section we provide guidelines for diagnosing a critical event in one of the two sites where the Split I/O Group has been implemented. With these guidelines you will be in a position to understand the extent of any damage: what is still running, what can be recovered, and with which impact on performance, applications, and service level agreements.

We assume that the configuration has been implemented as a campus solution where the distance between Site 1 and Site 2 is less than 10 km.
Example: C-9 SVC system status
IBM_2145:Split_Cluster_1:admin>lssystem
. lines omitted for brevity .
tier_free_capacity 1.85TB
has_nas_key no
layer replication
rc_buffer_size 48
IBM_2145:Split_Cluster_1:admin>
IBM_2145:Split_Cluster_1:admin>lsnode
id name        UPS_serial_number WWNN             status IO_Group_id IO_Group_name config_node UPS_unique_id    hardware iscsi_name                                        iscsi_alias panel_name enclosure_id canister_id enclosure_serial_number
1  node_159072 100014P293        500507680100C109 online 0           io_grp0       yes         2040000044802243 CG8      iqn.1986-03.com.ibm:2145.splitcluster1.node159072             159072
2  node_159680 100013I066        500507680100C478 online 0           io_grp0       no          2040000043640186 CG8      iqn.1986-03.com.ibm:2145.splitcluster1.node159680             159680
IBM_2145:Split_Cluster_1:admin>lsiogrp
id name            node_count vdisk_count host_count
0  io_grp0         2          34          4
1  io_grp1         0          0           4
2  io_grp2         0          0           4
3  io_grp3         0          0           4
4  recovery_io_grp 0          0           0
IBM_2145:Split_Cluster_1:admin>lscontroller
id controller_name ctrl_s/n    vendor_id product_id_low product_id_high
0  DS3400_03                   IBM       1726-4xx       FAStT
1  DS3400_09                   IBM       1726-4xx       FAStT
2  DS4700                      IBM       1814           FAStT
3  DS8000          75AAFC1FFFF IBM       2107900
4  DS5020                      IBM       1814           FAStT
IBM_2145:Split_Cluster_1:admin>lsmdiskgrp
id name         status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning easy_tier easy_tier_status
0  S3_DS4700_Q  online 1           0           99.50GB  256         99.50GB       0.00MB           0.00MB        0.00MB        0              80      auto      inactive
1  DS3400_03_11 online 1           4           299.50GB 256         3.50GB        296.00GB         296.00GB      296.00GB      98             80      auto      inactive
. lines omitted for brevity .
15 DS5020_12    online 1           4           600.00GB 256         304.00GB      296.00GB         296.00GB      296.00GB      49             80      auto      inactive
16 DS5020_22    online 1           4           600.00GB 256         304.00GB      296.00GB         296.00GB      296.00GB      49             80      auto      inactive
IBM_2145:Split_Cluster_1:admin>lsmdisk
id name         status mode    mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name UID                                                              tier
0  DS3400_09_11 online managed 5            DS3400_09_11   300.0GB  0000000000000000 DS3400_09       600a0b8000369a8100000ac44e4dd8cc00000000000000000000000000000000 generic_hdd
1  DS3400_09_12 online managed 6            DS3400_09_12   300.0GB  0000000000000001 DS3400_09       600a0b80003743e800000e184e4ddbbb00000000000000000000000000000000 generic_hdd
. lines omitted for brevity .
15 mdisk2       online managed 15           DS5020_12      600.0GB  0000000000000002 DS5020          60080e5000182ec60000b0814e560c1e00000000000000000000000000000000 generic_hdd
16 mdisk3       online managed 16           DS5020_22      600.0GB  0000000000000003 DS5020          60080e5000182ec60000b0864e560c5800000000000000000000000000000000 generic_hdd
IBM_2145:Split_Cluster_1:admin>lsvdisk
id name            IO_Group_id IO_Group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID                        fc_map_count copy_count fast_write_state se_copy_count RC_change
0  SLES_3650_10_01 0           io_grp0       online many         many           100.00GB many                              60050768018586084800000000000000 0            2          empty            0             no
1  SLES_3650_11_01 0           io_grp0       online many         many           100.00GB many                              60050768018586084800000000000001 0            2          empty            0             no
. lines omitted for brevity .
32 test2           0           io_grp0       online many         many           10.00GB  many                              60050768018586084800000000000020 0            2          empty            2             no
33 test3           0           io_grp0       online many         many           10.00GB  many                              60050768018586084800000000000021 0            2          empty            2             no
IBM_2145:Split_Cluster_1:admin>lsquorum
quorum_index status id name          controller_id controller_name active object_type override
0            online 8  DS4700_SVC_Q1 2             DS4700          yes    mdisk       yes
1            online 0  DS3400_09_11  1             DS3400_09       no     mdisk       yes
2            online 4  DS3400_03_11  0             DS3400_03       no     mdisk       yes
From the SVC CLI command output shown in Example C-9 on page 917, you can see that:
- The SVC clustered system is accessible through the CLI
- The SVC nodes are online and one of them is the config node
- The I/O Groups are in the correct state
- The storage subsystem controllers are connected
- The Managed Disk Groups are online
- The MDisks are online
- The volumes are online
- The three quorum disks are in the correct state
Now we can check the Volume Mirroring status by running a single SVC CLI command against each volume, as shown in Example C-10.
Example: C-10 Volume mirroring status
IBM_2145:Split_Cluster_1:admin>lsvdisk SLES_3650_10_01
id 0
name SLES_3650_10_01
IO_Group_id 0
IO_Group_name io_grp0
status online
mdisk_grp_id many
mdisk_grp_name many
capacity 100.00GB
type many
formatted no
mdisk_id many
mdisk_name many
. lines omitted for brevity .
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name DS3400_03_11
. lines omitted for brevity .
copy_id 1
status online
sync yes
primary no
mdisk_grp_id 13
mdisk_grp_name DS5020_11
. lines omitted for brevity .
tier_capacity 100.00GB
From the SVC CLI command output in Example C-10 on page 919, you can see that:
- The volume is online
- The storage pool name and the MDisk name are both "many", which means Volume Mirroring is in place
- Copy id 0 is online, in sync, and is the primary
- Copy id 1 is online, in sync, and is the secondary
If you have several volumes to check, you can create a customized script directly from the SVC shell, as sketched below.
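A minimal sketch of such a script, assuming the restricted SVC shell's basic loop support and the -delim output format (verify the exact shell capabilities at your code level):

# For every mirrored volume, print the state of each of its copies.
lsvdisk -nohdr -delim : -filtervalue copy_count=2 | while IFS=: read id name rest
do
  echo "=== volume $name ==="
  lsvdiskcopy -delim : $id
done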
Figure C-9 on page 921 and Figure C-10 on page 921 show an example of the Service Assistant menu on an SVC system with one failing node.
iv. Using a browser, connect to the service IP address of one of your SVC nodes:
https://<service_ip_add>/service/
Log in with your SVC system GUI password. After login you are redirected to the Service Assistant menu, as shown in Figure C-10.
From the Service Assistant menu you have the chance to bring at least a part of the SVC clustered system online for further diagnostics. For further detailed information about the Service Assistant menu, refer to Implementing the IBM System Storage SAN Volume Controller V6.3, SG24-7933.
2. If the SVC system management is available:
a. Check the status using the SVC CLI by running the commands shown in Example C-11.
Example: C-11 lssystem example
IBM_2145:Split_Cluster_1:admin>lssystem
id 0000020061618212
name Split_Cluster_1
location local
partnership
bandwidth
total_mdisk_capacity 6.0TB
space_in_mdisk_grps 6.0TB
space_allocated_to_vdisks 4.63TB
total_free_space 1.3TB
. lines omitted for brevity .
layer replication
rc_buffer_size 48
As you can see, the SVC clustered system is accessible; the GUI shows the same information, as shown in Figure C-11.
b. Check the status of the nodes as shown in Example C-12 on page 923.
Example: C-12 Node status example
IBM_2145:Split_Cluster_1:admin>lsnode
id name        UPS_serial_number WWNN             status  IO_Group_id IO_Group_name config_node UPS_unique_id    hardware iscsi_name                                        iscsi_alias panel_name enclosure_id canister_id enclosure_serial_number
1  node_159072 100014P293        500507680100C109 offline 0           io_grp0       no          2040000044802243 CG8      iqn.1986-03.com.ibm:2145.splitcluster1.node159072             159072
2  node_159680 100013I066        500507680100C478 online  0           io_grp0       yes         2040000043640186 CG8      iqn.1986-03.com.ibm:2145.splitcluster1.node159680             159680
IBM_2145:Split_Cluster_1:admin>lsnode node_159072
id 1
name node_159072
UPS_serial_number 100014P293
WWNN 500507680100C109
status offline
IO_Group_id 0
IO_Group_name io_grp0
partner_node_id 2
partner_node_name node_159680
config_node no
UPS_unique_id 2040000044802243
port_id 500507680140C109
port_status inactive
port_speed 8Gb
port_id 500507680130C109
port_status inactive
port_speed 8Gb
port_id 500507680110C109
port_status inactive
port_speed 8Gb
port_id 500507680120C109
port_status inactive
port_speed 8Gb
. lines omitted for brevity .
service_prefix_6
IBM_2145:Split_Cluster_1:admin>lsnode node_159680
id 2
name node_159680
UPS_serial_number 100013I066
WWNN 500507680100C478
status online
IO_Group_id 0
IO_Group_name io_grp0
partner_node_id 1
partner_node_name node_159072
config_node yes
UPS_unique_id 2040000043640186
port_id 500507680140C478
port_status active
port_speed 8Gb
port_id 500507680130C478
port_status active
port_speed 8Gb
port_id 500507680110C478
port_status active
port_speed 8Gb
port_id 500507680120C478
port_status active
port_speed 8Gb
. lines omitted for brevity .
service_prefix_6
As you can see from Example C-12 on page 923:
- The config node role has moved from node 1 to node 2
- Node 1 is offline
- The FC ports in node 1 are inactive
- Node 2 is online
- The FC ports in node 2 are still active
This means that in this event we have lost 50% of the SVC clustered system resources, but the system is still up and running. Using the GUI we can see the same information, as shown in Figure C-12.
The events panel in the GUI provides detailed evidence of the many events related to the power loss, as shown in Figure C-13 on page 925.
As you can see, the I/O Group still reports two nodes per I/O Group.
d. Check the quorum status as shown in Example C-14.
Example: C-14 Quorum status
IBM_2145:Split_Cluster_1:admin>lsquorum
quorum_index status id name          controller_id controller_name active object_type override
0            online 8  DS4700_SVC_Q1 2             DS4700          yes    mdisk       yes
1            online .. ...           0             DS3400_03       no     mdisk       ignored
2            online 4  DS3400_03_11  0             DS3400_03       no     mdisk       yes
As you can see, the active quorum disk is still active because it was not impacted by the critical event, but quorum index 1, which was in the site that suffered the power failure, is flagged with override ignored. This is because when the original resource (located on DS3400_09) goes offline and another resource is used instead, the override field in lsquorum shows ignored. From the GUI it is not as easy to find the status of the quorum disks: you have to go through all the MDisks and check in detail which are the active and defined quorum disks, as shown in Figure C-14.
Example: C-15 Controller status
IBM_2145:Split_Cluster_1:admin>lscontroller DS3400_03
. lines omitted for brevity .
mdisk_link_count 8
max_mdisk_link_count 8
degraded no
vendor_id IBM
product_id_low 1726-4xx
product_id_high FAStT
product_revision 0617
ctrl_s/n
allow_quorum yes
WWPN 203B00A0B836972A
path_count 4
max_path_count 16
WWPN 203A00A0B836972A
path_count 2
max_path_count 8
WWPN 202A00A0B836972A
path_count 2
max_path_count 8
WWPN 202B00A0B836972A
path_count 0
max_path_count 8
IBM_2145:Split_Cluster_1:admin>lscontroller DS3400_09
id 1
controller_name DS3400_09
WWNN 200300A0B8369ACA
mdisk_link_count 4
max_mdisk_link_count 4
degraded yes
vendor_id IBM
product_id_low 1726-4xx
product_id_high FAStT
product_revision 0617
ctrl_s/n
allow_quorum yes
WWPN 202300A0B8369ACA
path_count 0
max_path_count 4
WWPN 203300A0B8369ACA
path_count 0
max_path_count 4
WWPN 202400A0B8369ACA
path_count 0
max_path_count 8
WWPN 203400A0B8369ACA
path_count 0
max_path_count 4
IBM_2145:Split_Cluster_1:admin>lscontroller DS4700
id 2
controller_name DS4700
WWNN 200400A0B82AB012
mdisk_link_count 1
max_mdisk_link_count 1
degraded yes
vendor_id IBM
product_id_low 1814
product_id_high FAStT
product_revision 0916
ctrl_s/n
allow_quorum yes
WWPN 200400A0B82AB013
path_count 0
max_path_count 2
WWPN 200500A0B82AB014
path_count 0
max_path_count 0
WWPN 200400A0B82AB014
path_count 1
max_path_count 2
WWPN 200500A0B82AB013
path_count 0
max_path_count 0
IBM_2145:Split_Cluster_1:admin>lscontroller DS8000
id 3
controller_name DS8000
WWNN 5005076303FFC5C7
mdisk_link_count 0
max_mdisk_link_count 0
degraded yes
vendor_id IBM
product_id_low 2107900
product_id_high
product_revision 3.44
ctrl_s/n 75AAFC1FFFF
allow_quorum yes
WWPN 50050763033045C7
path_count 0
max_path_count 0
WWPN 50050763032345C7
path_count 0
max_path_count 0
WWPN 50050763033B45C7
path_count 0
max_path_count 0
WWPN 50050763033845C7
path_count 0
max_path_count 0
IBM_2145:Split_Cluster_1:admin>lscontroller DS5020
id 4
controller_name DS5020
WWNN 20040080E51829D0
mdisk_link_count 4
max_mdisk_link_count 4
degraded yes
vendor_id IBM
product_id_low 1814
product_id_high FAStT
product_revision 1060
ctrl_s/n
allow_quorum yes
WWPN 20340080E51829D0
path_count 0
max_path_count 0
WWPN 20440080E51829D0
path_count 0
max_path_count 4
WWPN 20350080E51829D0
path_count 0
max_path_count 4
WWPN 20450080E51829D0
path_count 0
max_path_count 0
WWPN 20140080E51829D0
path_count 0
max_path_count 2
WWPN 20240080E51829D0
path_count 0
max_path_count 2
WWPN 20150080E51829D0
path_count 0
max_path_count 2
WWPN 20250080E51829D0
path_count 0
max_path_count 2
As you can see in the output, some controllers are still accessible from the SVC system and others are no longer accessible, because the power loss in Site 1 impacted the SVC node, the storage subsystem, and the FC SAN switches. The same information can be obtained from the GUI, as shown in Figure C-15.
f. Check the storage pool status as shown in Example C-16 on page 930.
Example: C-16 Storage pool status
IBM_2145:Split_Cluster_1:admin>lsmdiskgrp
id name         status  mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning easy_tier easy_tier_status
0  S3_DS4700_Q  online  1           0           99.50GB  256         99.50GB       0.00MB           0.00MB        0.00MB        0              80      auto      inactive
1  DS3400_03_11 online  1           4           299.50GB 256         3.50GB        296.00GB         296.00GB      296.00GB      98             80      auto      inactive
2  DS3400_03_12 online  1           6           299.50GB 256         3.00GB        316.00GB         296.00GB      296.43GB      105            80      auto      inactive
3  DS3400_03_21 online  1           4           300.00GB 256         4.00GB        296.00GB         296.00GB      296.00GB      98             80      auto      inactive
4  DS3400_03_22 online  1           4           300.00GB 256         4.00GB        296.00GB         296.00GB      296.00GB      98             80      auto      inactive
5  DS3400_09_11 offline 1           4           300.00GB 256         4.00GB        296.00GB         296.00GB      296.00GB      98             80      auto      inactive
6  DS3400_09_12 offline 1           4           300.00GB 256         4.00GB        296.00GB         296.00GB      296.00GB      98             80      auto      inactive
7  DS3400_09_21 offline 1           4           300.00GB 256         4.00GB        296.00GB         296.00GB      296.00GB      98             80      auto      inactive
8  DS3400_09_22 offline 1           4           300.00GB 256         4.00GB        296.00GB         296.00GB      296.00GB      98             80      auto      inactive
9  DS3400_03_13 online  1           4           300.00GB 256         4.00GB        296.00GB         296.00GB      296.00GB      98             80      auto      inactive
10 DS3400_03_14 online  1           4           300.00GB 256         4.00GB        296.00GB         296.00GB      296.00GB      98             80      auto      inactive
11 DS3400_03_23 online  1           4           300.00GB 256         4.00GB        296.00GB         296.00GB      296.00GB      98             80      auto      inactive
12 DS3400_03_24 online  1           4           300.00GB 256         4.00GB        296.00GB         296.00GB      296.00GB      98             80      auto      inactive
13 DS5020_11    offline 1           6           600.00GB 256         303.50GB      316.00GB         296.00GB      296.43GB      52             80      auto      inactive
14 DS5020_21    offline 1           4           600.00GB 256         304.00GB      296.00GB         296.00GB      296.00GB      49             80      auto      inactive
15 DS5020_12    offline 1           4           600.00GB 256         304.00GB      296.00GB         296.00GB      296.00GB      49             80      auto      inactive
16 DS5020_22    offline 1           4           600.00GB 256         304.00GB      296.00GB         296.00GB      296.00GB      49             80      auto      inactive
As you can see from the output, because of the critical event some storage pools are offline and others are still online. The offline storage pools are the ones that had space allocated on the storage subsystems that suffered the loss of power.
g. Check the MDisk status as shown in Example C-17.
Example: C-17 MDisk status
IBM_2145:Split_Cluster_1:admin>lsmdisk
id name         status  mode    mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name UID                                                              tier
0  DS3400_09_11 offline managed 5            DS3400_09_11   300.0GB  0000000000000000 DS3400_09       600a0b8000369a8100000ac44e4dd8cc00000000000000000000000000000000 generic_hdd
1  DS3400_09_12 offline managed 6            DS3400_09_12   300.0GB  0000000000000001 DS3400_09       600a0b80003743e800000e184e4ddbbb00000000000000000000000000000000 generic_hdd
. lines omitted for brevity .
10 DS3400_03_23 online  managed 11           DS3400_03_23   300.0GB  0000000000000005 DS3400_03       600a0b800036972a000017644e55b23200000000000000000000000000000000 generic_hdd
11 DS3400_03_14 online  managed 10           DS3400_03_14   300.0GB  0000000000000006 DS3400_03       600a0b800036a5c0000015224e55b1c100000000000000000000000000000000 generic_hdd
12 DS3400_03_24 online  managed 12           DS3400_03_24   300.0GB  0000000000000007 DS3400_03       600a0b800036a5c0000015244e55b20e00000000000000000000000000000000 generic_hdd
13 mdisk0       offline managed 13           DS5020_11      600.0GB  0000000000000000 DS5020          60080e5000182ec60000b07e4e560bfc00000000000000000000000000000000 generic_hdd
14 mdisk1       offline managed 14           DS5020_21      600.0GB  0000000000000001 DS5020          60080e5000182ec60000b0834e560c3b00000000000000000000000000000000 generic_hdd
15 mdisk2       offline managed 15           DS5020_12      600.0GB  0000000000000002 DS5020          60080e5000182ec60000b0814e560c1e00000000000000000000000000000000 generic_hdd
16 mdisk3       offline managed 16           DS5020_22      600.0GB  0000000000000003 DS5020          60080e5000182ec60000b0864e560c5800000000000000000000000000000000 generic_hdd
We can get the same information from the GUI as shown in Figure C-17 on page 932.
As you can see from the output in Example C-18, although we have lost 50% of the resources due to the loss of power in Site 1, the volumes are not offline but in a degraded state. This is because Volume Mirroring acted to guarantee business continuity, and the volumes are still accessible from the hosts that are still running in Site 2, where power is still present. In this case it can be helpful to use the filtervalue option on the SVC CLI command to reduce the number of lines produced and volumes to check, as shown in Example C-19.
Example: C-19 Volume status
IBM_2145:Split_Cluster_1:admin>lsvdisk -nohdr -filtervalue copy_count=2
0  SLES_3650_10_01 0 io_grp0 degraded many many 100.00GB many 60050768018586084800000000000000 0 2 empty 0 no
1  SLES_3650_11_01 0 io_grp0 degraded many many 100.00GB many 60050768018586084800000000000001 0 2 empty 0 no
31 MS_3650_05_08   0 io_grp0 degraded many many 48.00GB  many 6005076801858608480000000000001F 0 2 empty 0 no
. lines omitted for brevity .
32 test2           0 io_grp0 degraded many many 10.00GB  many 60050768018586084800000000000020 0 2 empty 2 no
33 test3           0 io_grp0 degraded many many 10.00GB  many 60050768018586084800000000000021 0 2 empty 2 no
As you can see from the output in Example C-19, one copy of each volume is offline, and you can also trace which storage pool each offline copy is related to. We can get the same information from the GUI, as shown in Figure C-18.
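When the volume count is large, reducing this output to a simple count can help you track recovery progress. A hedged host-side sketch over SSH (the cluster address is a placeholder):

# Count the volumes currently in degraded state.
ssh admin@<cluster_ip> "svcinfo lsvdisk -nohdr -filtervalue status=degraded" | wc -l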
As you can see from Figure C-18, it is fairly easy to understand which resources are online, which are not, and why each volume has a degraded status.
3. Check the path status. Check the status of the storage paths from your hosts' point of view using your multipathing software commands. For SVC, the recommended multipathing software is the Subsystem Device Driver (SDD). You can verify the SDD vpath device configuration by entering the lsvpcfg or datapath query device commands (a host-side sketch follows these steps). For further detailed information about SDD commands, refer to:
http://www-01.ibm.com/support/docview.wss?rs=540&context=ST52G7&uid=ssg1S7000303
or to the Multipath Subsystem Device Driver User's Guide, GC52-1309.
All of the above steps are also valid for a limited failure, where we face a failure with limited impact in one of the sites. In the case of a limited failure, the following steps can help you verify the status of your Split I/O Group infrastructure:
4. Check your SAN using the FC switch or director CLI or Web interface to verify any failure.
5. Check the FC connections between the two sites (passive WDM and links) using the FC switch or director CLI or Web interface to verify any failure.
6. Check the storage subsystem status using its own management interface to verify any failure.
After going through steps 1 through 6, when you have identified the root cause and the impact of the event on your infrastructure, you have all the information needed to make one of the following strategic decisions:
- Wait until the failure in one of the two sites is fixed, or
- Declare a disaster and start the recovery actions described in "Recovery guidelines" on page 935
If you decide to wait until the failure in one of the two sites is fixed, then when the impacted resources become available again, the SVC Split I/O Group will be fully operational:
- Automatic Volume Mirroring resynchronization will take place
- Missing nodes will rejoin the SVC clustered system
If the impact of the failure is more serious and you are forced to declare a disaster, you will have to make the more strategic decisions discussed in "Recovery guidelines" on page 935.
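A brief sketch of the host-side path check from step 3, using the SDD commands cited above (output format varies by platform and SDD level):

# On a host running SDD: list the vpath configuration, then the state
# of every path and adapter. In a site failure, expect roughly half of
# the paths per device to be unavailable.
lsvpcfg
datapath query device
datapath query adapter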
If you have implemented a Split I/O Group configuration with ISLs, then in addition to the checklist steps described in "Diagnosis Guidelines for NO ISL configuration" on page 916, you should execute the following verification steps:
1. Check your SAN using the FC switch or director CLI or Web interface to verify any partial failure related to a single switch, director, or virtual SAN (Public or Private).
2. Check the FC connections between the two sites (active WDM and ISL) using the FC switch or director CLI or Web interface to verify any partial failure.
3. Check the ISL link status using your active WDM management interface.
4. Check the status of the quorum disk links using your SAN FC switch or director CLI or Web interface.
After you have identified the root cause and the impact of the event on your infrastructure, you have all the information needed to make one of the following decisions:
- Wait until the failure in one of the two sites is fixed, or
- Declare a disaster and start the recovery actions described in "Recovery guidelines" on page 935
If you decide to wait until the failure in one of the two sites is fixed, then when the impacted resources become available, the SVC Split I/O Group will be fully operational:
- Automatic Volume Mirroring resynchronization will take place
- Missing nodes will rejoin the SVC clustered system
Recovery guidelines
In this section we will explore some recovery scenarios. Regardless of the different scenarios, the common starting point will be the complete loss of site 1 or site 2 caused by a severe critical event.
After an initial analysis phase of the event, a strategic decision has to be made:
- Wait until the lost site is restored, or
- Start a recovery procedure so that the surviving site configuration is rebuilt to provide the same performance and availability characteristics as before the event
If the recovery times are too long and you cannot wait for the lost site's eventual return to life, you will need to take the appropriate recovery actions.
What do you need to supply to recover the Split I/O Group configuration
If you have arrived at this point, it is because you cannot wait for the lost site to be brought back to life in a reasonable time, so you need to take recovery actions. The answers to the following questions determine the appropriate recovery action:
- Where do you want to recover to? In the same site or in a new site?
- Is it a temporary or permanent recovery?
- If it is a temporary recovery, do we need to plan a failback scenario?
- Does the recovery action address performance issues or business continuity issues?
It is almost certain that we will need additional storage space, additional SVC nodes, and additional SAN components:
- Do we plan to use brand new nodes supplied by IBM?
- Do we plan to reuse other, existing SVC nodes that might currently be used for non-business-critical applications (for example, a test environment)?
- Do we plan to use new FC SAN switches or directors?
- Do we plan to reconfigure FC SAN switches or directors to host newly acquired SVC nodes and storage?
- Do we plan to use new back-end storage subsystems?
- Do we plan to configure free space on the surviving storage subsystems to host the space required for Volume Mirroring?
The answers to these questions direct the recovery strategy and investment steps to take, which cannot be improvised but must be part of a recovery plan, in order to minimize the impact to applications and therefore to service levels. We describe the recovery guidelines in detail, assuming that we have already answered the above questions and have decided to recover a fully redundant configuration in the same surviving site, supplying new SVC nodes, new storage subsystems, and new FC SAN devices. We also give some indication of how to reuse SVC nodes, storage, or SAN devices that are already available, and guidelines on how to plan a failback scenario. If you do need to recover your Split I/O Group infrastructure, we recommend that you involve IBM Support as early as possible.
We decided to recover the configuration exactly as it was, using passive WDM, even though it is recovered in the same site. This makes it easier in the future to stretch this configuration over distance when a new site is provided, by simply executing the following major steps:
1. Disconnect the links between the passive WDMs
2. Uninstall and reinstall all the brand new devices in the brand new site
3. Reconnect the links between the passive WDMs
The following steps have to be executed to recover your Split I/O Group configuration as it was before the critical event in the same site, after you have installed the new devices.
1. Restore your back-end storage subsystem configuration as it was, starting from the backup taken as suggested earlier. LUN masking can be done in advance because the SVC node WWNNs are already known.
2. Restore your SAN configuration exactly as it was before the critical event. This can be done by configuring the new switches with the same domain IDs as before and connecting them to the surviving switches through the passive WDM. In this way the WWPN zoning automatically propagates to the new switches.
3. Connect, if possible, the new storage subsystems to exactly the same FC switch ports as before the critical event. The SVC-to-storage zoning has to be reconfigured so that the new storage subsystem WWNNs are visible. Old WWNNs can be removed, but take care to remove the right ones, because at this time we have just one active volume copy.
4. Do not connect the SVC node FC cables just yet. Wait until directed to do so by the SVC node WWNN change procedure.
5. Remove the offline node from the SVC system configuration with the SVC CLI commands shown in Example C-20.
Example: C-20 Remove node command
IBM_2145:Split_Cluster_1:admin>lsnode
id name        UPS_serial_number WWNN             status  IO_group_id IO_group_name config_node UPS_unique_id    hardware iscsi_name                                        iscsi_alias panel_name enclosure_id canister_id enclosure_serial_number
1  node_159072 100014P293        500507680100C109 offline 0           io_grp0       no          2040000044802243 CG8      iqn.1986-03.com.ibm:2145.splitcluster1.node159072             159072
2  node_159680 100013I066        500507680100C478 online  0           io_grp0       yes         2040000043640186 CG8      iqn.1986-03.com.ibm:2145.splitcluster1.node159680             159680
IBM_2145:Split_Cluster_1:admin>rmnode node_159072
IBM_2145:Split_Cluster_1:admin>lsnode
id name        UPS_serial_number WWNN             status  IO_group_id IO_group_name config_node UPS_unique_id    hardware iscsi_name                                        iscsi_alias panel_name enclosure_id canister_id enclosure_serial_number
2  node_159680 100013I066        500507680100C478 online  0           io_grp0       yes         2040000043640186 CG8      iqn.1986-03.com.ibm:2145.splitcluster1.node159680             159680
6. First identify which copy id is offline for each volume using the SVC CLI command as shown in Example C-21.
Example: C-21 How to identify the volume copy id
IBM_2145:Split_Cluster_1:admin>lsvdisk 0
id 0
name SLES_3650_10_01
IO_group_id 0
IO_group_name io_grp0
status degraded
mdisk_grp_id many
mdisk_grp_name many
. lines omitted for brevity .
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name DS3400_03_11
. lines omitted for brevity .
copy_id 1
status offline
. lines omitted for brevity .
tier generic_hdd
tier_capacity 100.00GB
i. Remove each identified offline Volume Mirroring copy with the SVC CLI command as shown in Example C-22.
Example: C-22 rmvdiskcopy
IBM_2145:Split_Cluster_1:admin>rmvdiskcopy -copy 1 SLES_3650_10_01
IBM_2145:Split_Cluster_1:admin>lsvdisk SLES_3650_10_01
id 0
name SLES_3650_10_01
status degraded
. lines omitted for brevity .
copy_id 0
status online
sync yes
primary yes
. lines omitted for brevity .
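If many volumes are affected, the identification in step 6 and the copy removal can be combined in a script from the SVC shell. A hedged sketch (the column positions follow the lsvdiskcopy concise output; verify the copy list before running, because rmvdiskcopy is destructive):

# For every mirrored volume, remove the copy whose status is offline.
lsvdisk -nohdr -delim : -filtervalue copy_count=2 | while IFS=: read id rest
do
  lsvdiskcopy -nohdr -delim : $id | while IFS=: read vid vname cid cstatus crest
  do
    if [ "$cstatus" = "offline" ]
    then
      echo "Removing copy $cid of volume $vname"
      rmvdiskcopy -copy $cid $vid
    fi
  done
done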
7. Power on the new node and leave the FC cables disconnected.
8. Change the new node's WWNN using the following procedure:
a. Power on the replacement node from the front panel with the Fibre Channel cables and the Ethernet cable disconnected. Once the node has booted, you might receive error 540, "An Ethernet port has failed on the 2145", and/or error 558, "The 2145 cannot see the fibre-channel fabric or the fibre-channel card port speed might be set to a different speed than the Fibre Channel fabric". This is to be expected because the node was booted with no fiber-optic cables connected and no LAN connection. If you see error 550, "Cannot form a cluster due to a lack of cluster resources", this node still thinks it is part of an SVC clustered system; if this is a new node from IBM, this should not occur. Change the WWNN of the replacement node to match the WWNN that you recorded earlier by following these steps:
b. From the front panel of the new node, press the down button until the Node: panel is displayed, and then use the right or left navigation button to display the Node WWNN: panel. Press and hold the down button, press and release the select button, and then release the down button. Line one displays Edit WWNN: and line two shows the last five characters of this new node's WWNN.
c. Press and hold the down button, press and release the select button, and then release the down button to enter WWNN edit mode. The first character of the WWNN is highlighted.
Note: When changing the WWNN you might receive error 540, "An Ethernet port has failed on the 2145", and/or error 558, "The 2145 cannot see the FC fabric or the FC card port speed might be set to a different speed than the Fibre Channel fabric". This is to be expected because the node was booted with no fiber-optic cables connected and no LAN connection. However, if such an error occurs while you are editing the WWNN, you are taken out of edit mode with partial changes saved, and you need to re-enter edit mode by starting again at step b.
d. Press the up or down button to increment or decrement the character that is displayed. The characters wrap F to 0 or 0 to F.
e. Press the left navigation button to move to the next field, or the right navigation button to return to the previous field, and repeat step d for each field. At the end of this step, the characters that are displayed must be the same as the WWNN you recorded earlier.
f. Press the select button to retain the characters that you have updated and return to the WWNN panel.
g. Press the select button again to apply the characters as the new WWNN for the node.
Important: You must press the select button twice, as steps f and g instruct you to do. After step f it may appear that the WWNN has been changed, but it is step g that applies the change.
h. Ensure that the WWNN has changed by displaying the Node WWNN: panel again, as in step b.
9. Connect the node to the same FC switch ports as before the critical event. This is the key point of the recovery procedure: connecting the new SVC nodes to the same SAN ports and reusing the same SVC WWNNs avoids having to reboot, rediscover, or reconfigure anything from the host point of view for the hosts to see the lost disk resources and paths come back to life.
Important: Do not connect the new nodes to different ports at the switch or director, as this causes port IDs to change, which could impact host access to volumes or cause problems with adding the new node back into the clustered system. If you are not able to connect the SVC nodes to the same FC SAN ports as before, be aware that you will be forced to reboot, rediscover, or reconfigure your hosts for them to see the lost disk resources and bring the paths back to life.
10. Issue the SVC CLI command shown in Example C-23 to verify that the last five characters of the WWNN are correct.
Example: C-23 Verify candidate node with correct WWNN
IBM_2145:Split_Cluster_1:admin>lsnodecandidate
id               panel_name UPS_serial_number UPS_unique_id    hardware
500507680100C109 159072     100014P293        2040000044802243 CG8
Important: If the WWNN does not match the original node's WWNN exactly as recorded, you must repeat steps 8b to 8g.
11. Add the node to the clustered system and ensure that it is added back to the same I/O Group as the original node, using the SVC CLI commands shown in Example C-24.
Example: C-24 Adding node
IBM_2145:Split_Cluster_1:admin>addnode -wwnodename 500507680100C109 -iogrp 0
Node, id [3], successfully added
IBM_2145:Split_Cluster_1:admin>lsnode
id name        UPS_serial_number WWNN             status IO_group_id IO_group_name config_node UPS_unique_id    hardware iscsi_name                                        iscsi_alias panel_name enclosure_id canister_id enclosure_serial_number
3  node_159072 100014P293        500507680100C109 online 0           io_grp0       no          2040000044802243 CG8      iqn.1986-03.com.ibm:2145.splitcluster1.node159072             159072
2  node_159680 100013I066        500507680100C478 online 0           io_grp0       yes         2040000043640186 CG8      iqn.1986-03.com.ibm:2145.splitcluster1.node159680             159680
12. Verify that all volumes for this I/O Group are back online and no longer degraded, using the SVC CLI command shown in Example C-25. If the node replacement process is being done disruptively, such that no I/O is occurring to the I/O Group, you still need to wait a period of time (we recommend 30 minutes) to make sure the new node is back online and available to take over before you replace the next node in the I/O Group.
Example: C-25 No longer degraded volumes
IBM_2145:Split_Cluster_1:admin>lsvdisk -filtervalue status=degraded
IBM_2145:Split_Cluster_1:admin>
13. Discover the new MDisks supplied by the new back-end storage subsystems. They appear with a status of online and a mode of unmanaged, as shown in Example C-26.
Example: C-26 New MDisks discovered
IBM_2145:Split_Cluster_1:admin>detectmdisk
IBM_2145:Split_Cluster_1:admin>lsmdisk -filtervalue mode=unmanaged
id name         status mode      mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name UID                                                              tier
0  DS3400_09_11 online unmanaged                             300.0GB  0000000000000000 DS3400_09       600a0b8000369a8100000ac44e4dd8cc00000000000000000000000000000000 generic_hdd
1  DS3400_09_12 online unmanaged                             300.0GB  0000000000000001 DS3400_09       600a0b80003743e800000e184e4ddbbb00000000000000000000000000000000 generic_hdd
2  DS3400_09_21 online unmanaged                             300.0GB  0000000000000002 DS3400_09       600a0b8000369a8100000ac74e4dd8fa00000000000000000000000000000000 generic_hdd
3  DS3400_09_22 online unmanaged                             300.0GB  0000000000000003 DS3400_09       600a0b80003743e800000e1a4e4ddbe900000000000000000000000000000000 generic_hdd
13 mdisk0       online unmanaged                             600.0GB  0000000000000000 DS5020          60080e5000182ec60000b07e4e560bfc00000000000000000000000000000000 generic_hdd
14 mdisk1       online unmanaged                             600.0GB  0000000000000001 DS5020          60080e5000182ec60000b0834e560c3b00000000000000000000000000000000 generic_hdd
15 mdisk2       online unmanaged                             600.0GB  0000000000000002 DS5020          60080e5000182ec60000b0814e560c1e00000000000000000000000000000000 generic_hdd
14. Add the MDisks to the storage pools using the SVC CLI commands shown in Example C-27, re-creating the MDisk-to-storage-pool relationships that existed before the critical event.
Important: Before you add the newly discovered MDisks, remove the previous MDisks that are still defined in each storage pool but no longer physically exist after the critical event (they may appear to the SVC in an offline or degraded state); a sketch follows.
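A hedged sketch of removing such a stale MDisk from its pool before re-adding the replacement. Whether the -force flag is needed depends on the MDisk state; check with lsmdisk first (the names are from this scenario):

rmmdisk -mdisk mdisk0 -force DS5020_11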
Example: C-27 Adding new MDisks to storage pools
IBM_2145:Split_Cluster_1:admin>addmdisk -mdisk DS3400_09_11 DS3400_09_11
IBM_2145:Split_Cluster_1:admin>addmdisk -mdisk DS3400_09_12 DS3400_09_12
IBM_2145:Split_Cluster_1:admin>addmdisk -mdisk DS3400_09_21 DS3400_09_21
Important: After you have re-added your newly discovered MDisks to the storage pools, the three-quorum-disk configuration is automatically fixed. You can check this with the SVC CLI command shown in Example C-28.
Example: C-28 Quorum status
IBM_2145:Split_Cluster_1:admin>lsquorum
quorum_index status id name          controller_id controller_name active object_type override
0            online 8  DS4700_SVC_Q1 2             DS4700          yes    mdisk       yes
1            online 0  DS3400_09_11  1             DS3400_09       no     mdisk       yes
2            online 4  DS3400_03_11  0             DS3400_03       no     mdisk       yes
15. Reactivate Volume Mirroring for each volume in accordance with your Volume Mirroring requirements, re-creating the same business continuity infrastructure as before the critical event, using the SVC CLI command shown in Example C-29.
Example: C-29 addvdiskcopy example
IBM_2145:Split_Cluster_1:admin>addvdiskcopy -mdiskgrp DS3400_09_12 SLES_3650_11_02
Vdisk [3] copy [1] successfully created
Alternatively, use the GUI as shown in Figure C-28 on page 947 and Figure C-29 on page 947.
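Where many volumes need their second copy re-created, a scripted approach can help. A hypothetical sketch that adds a copy in one fixed pool for every single-copy volume; a real recovery usually maps each volume to its specific pool, as before the event:

# Re-add a second copy for every volume that currently has one copy.
lsvdisk -nohdr -delim : -filtervalue copy_count=1 | while IFS=: read id name rest
do
  addvdiskcopy -mdiskgrp DS3400_09_12 $name
done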
16. Check the Volume Mirroring synchronization progress with the SVC CLI command shown in Example C-30.
Example: C-30 lsvdisksyncprogress example
IBM_2145:Split_Cluster_1:admin>lsvdisksyncprogress
vdisk_id vdisk_name      copy_id progress estimated_completion_time
0        SLES_3650_10_01 1       14       111012121307
1        SLES_3650_11_01 1       13       111012121421
2        SLES_3650_10_02 1       13       111012121455
3        SLES_3650_11_02 1       11       111012121709
It is possible to speed up the synchronization progress with the chvdisk command, but the more speed you give to the synchronization process, the more impact on overall performance you may see (a per-volume rate sketch follows Example C-31).
17. With the following SVC CLI command, you can consider rebalancing your Split I/O Group configuration so that each volume's Volume Mirroring primary copy is related to the same storage pool and preferred node as before the critical event, even though both copies are now in the same site. Doing so helps with an eventual future stretch of your configuration when a new remote site becomes available. Use the SVC CLI command shown in Example C-31.
Example: C-31 Change volume primary copy id
IBM_2145:Split_Cluster_1:admin>chvdisk -primary 1 SLES_3650_11_01
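The same chvdisk command also controls the per-volume synchronization rate mentioned before step 17. A hedged example; per the command reference, the -syncrate parameter takes a value from 1 to 100, and higher is faster at the cost of more impact on host I/O:

chvdisk -syncrate 85 SLES_3650_10_01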
You have now completed the procedure to recover a Split I/O Group configuration after a critical event. At this point all your volumes are accessible from the hosts' point of view, and the recovery action has not impacted your applications.
If you have implemented a Split I/O Group configuration with ISL, then in addition to the steps described in "Recovery Guidelines for No ISL configuration" on page 937, you have to:
1. Restore your SAN configuration (Private and Public) according to your documentation
2. Restore your active/passive WDM configuration to re-establish the ISLs between the two sites
Related publications
The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this book.
IBM Redbooks
The following IBM Redbooks publications provide additional information about the topics in this document. Note that some publications referenced in this list might be available in softcopy only.
Introduction to Storage Area Networks, SG24-5470
IBM System Storage: Implementing an IBM SAN, SG24-6116
DS4000 Best Practices and Performance Tuning Guide, SG24-6363
IBM System Storage Business Continuity: Part 1 Planning Guide, SG24-6547
IBM System Storage Business Continuity: Part 2 Solutions Guide, SG24-6548
Get More Out of Your SAN with IBM Tivoli Storage Manager, SG24-6687
IBM Tivoli Storage Area Network Manager: A Practical Introduction, SG24-6848
DS8000 Performance Monitoring and Tuning, SG24-7146
Monitoring Your Storage Subsystems with TotalStorage Productivity Center, SG24-7364
Using the SVC for Business Continuity, SG24-7371
SAN Volume Controller: Best Practices and Performance Guidelines, SG24-7521
SAN Volume Controller V4.3.0 Advanced Copy Services, SG24-7574
IBM XIV Storage System: Architecture, Implementation and Usage, SG24-7659
IBM Tivoli Storage Productivity Center V4.1 Release Guide, SG24-7725
IBM SAN Volume Controller 4.2.1 Cache Partitioning, REDP-4426
Other publications
These publications are also relevant as further information sources:
IBM System Storage SAN Volume Controller: Planning Guide, GA32-0551
IBM System Storage Open Software Family SAN Volume Controller: Planning Guide, GA22-1052
IBM System Storage SAN Volume Controller: Service Guide, GC26-7901
IBM System Storage SAN Volume Controller Model 2145-8A4 Hardware Installation Guide, GC27-2219
IBM System Storage SAN Volume Controller Model 2145-8G4 Hardware Installation Guide, GC27-2220
IBM System Storage SAN Volume Controller Models 2145-8F2 and 2145-8F4 Hardware Installation Guide, GC27-2221
IBM SAN Volume Controller Software Installation and Configuration Guide, GC27-2286
IBM System Storage SAN Volume Controller Command-Line Interface User's Guide, GC27-2287
IBM System Storage Master Console: Installation and User's Guide, GC30-4090
Multipath Subsystem Device Driver User's Guide, GC52-1309
IBM System Storage SAN Volume Controller Model 2145-CF8 Hardware Installation Guide, GC52-1356
IBM System Storage Productivity Center Software Installation and User's Guide, SC23-8823
IBM System Storage Productivity Center Introduction and Planning Guide, SC23-8824
Subsystem Device Driver User's Guide for the IBM TotalStorage Enterprise Storage Server and the IBM System Storage SAN Volume Controller, SC26-7540
IBM System Storage Open Software Family SAN Volume Controller: Installation Guide, SC26-7541
IBM System Storage Open Software Family SAN Volume Controller: Service Guide, SC26-7542
IBM System Storage Open Software Family SAN Volume Controller: Configuration Guide, SC26-7543
IBM System Storage Open Software Family SAN Volume Controller: Command-Line Interface User's Guide, SC26-7544
IBM System Storage Open Software Family SAN Volume Controller: CIM Agent Developer's Reference, SC26-7545
IBM System Storage Open Software Family SAN Volume Controller: Host Attachment Guide, SC26-7563
Command-Line Interface User's Guide, SC27-2287
IBM System Storage Productivity Center User's Guide Version 1 Release 4, SC27-2336
IBM TotalStorage Multipath Subsystem Device Driver User's Guide, SC30-4096
IBM System Storage SAN Volume Controller V5.1.0 - Host Attachment Guide, SG26-7905
IBM Tivoli Storage Productivity Center and IBM Tivoli Storage Productivity Center for Replication Installation and Configuration Guide, SC27-2337
Online resources
These websites are also relevant as further information sources:
IBM TotalStorage home page
http://www.storage.ibm.com
SAN Volume Controller supported platforms
http://www-1.ibm.com/servers/storage/support/software/sanvc/index.html
Download site for Windows Secure Shell (SSH) freeware
http://www.chiark.greenend.org.uk/~sgtatham/putty
IBM site to download SSH for AIX
http://oss.software.ibm.com/developerworks/projects/openssh
Open source site for SSH for Windows and Mac
http://www.openssh.com/windows.html
Cygwin Linux-like environment for Windows
http://www.cygwin.com
IBM Tivoli Storage Area Network Manager site
http://www-306.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageAreaNetworkManager.html
Microsoft Knowledge Base Article 131658
http://support.microsoft.com/support/kb/articles/Q131/6/58.asp
Microsoft Knowledge Base Article 149927
http://support.microsoft.com/support/kb/articles/Q149/9/27.asp
Sysinternals home page
http://www.sysinternals.com
Subsystem Device Driver download site
http://www-1.ibm.com/servers/storage/support/software/sdd/index.html
IBM TotalStorage Virtualization home page
http://www-1.ibm.com/servers/storage/software/virtualization/index.html
SVC support page
http://www-947.ibm.com/systems/support/supportsite.wss/selectproduct?taskind=4&brandind=5000033&familyind=5329743&typeind=0&modelind=0&osind=0&psid=sr&continue.x=1
SVC online documentation
http://publib.boulder.ibm.com/infocenter/svcic/v3r1m0/index.jsp
IBM Redbooks publications about SVC
http://www.redbooks.ibm.com/cgi-bin/searchsite.cgi?query=SVC
7933IX.fm
Index
Symbols
?????? 386 backup time 374 balance 84 bandwidth 68, 84, 9697, 445, 881, 884 bandwidth impact 459 basic setup requirements 137 BB 905 bind 224 bit array 374 bitmap 37, 374 bitmap space 27 block-level protocol 156 boot 100 boss node 39 bottleneck 60 bottlenecks 102103 budget 30 budget allowance 30 buffers 907 buffer-to-buffer 905906 burst traffic 903 business requirements 102, 880
Numerics
10 GbE 156
A
active quorum disk 40 add a node 527 add additional ports 671 add an HBA 484 Add SSH Public Key 136 addressable extents 19 administration tasks 659 Advanced Copy Services 94 Advanced Settings 664, 827, 829 AIX host system 171 AIX specific information 162 AIX toolbox 171 AIX-based hosts 162 alias 31 alias string 157 aliases 31 analysis 102, 880 application server guidelines 93 application testing 375 assign VDisks 502 assigned VDisk 166 asynchronous notifications 398399 asynchronous remote 435 asynchronous remote copy 36, 412, 435436 asynchronous replication 457 asynchronously 435 authentication 53, 139, 159 authentication service 56 automate tasks 511 automatic Linux system 200 automatic update process 200 automatically discover 472 automation 511 auxiliary 446, 569, 592 auxiliary VDisk 436, 447, 454 available managed disks 473
C
cable connections 73 cable length 59 cache 25, 42, 389, 436, 881 cache disabled 25 cache mode 904 caching 103, 881 caching capability 102, 880 candidate node 528 capacity 91 capacity measurement 685 CDB 31 Challenge Handshake Authentication Protocol 81 challenge message 34 Challenge-Handshake Authentication Protocol 34, 159, 482 change the IP addresses 522 change volumes 595 changes 374 Channel extender 892 channel extender 897 CHAP 34, 81, 159, 482 CHAP authentication 34, 159, 667 CHAP secret 34, 159, 667 chpartnership 459 chrcconsistgrp 461 chrcrelationship 461 chunks 89, 233 CIM agent 43 CIM Client 42 CIMOM 32, 42, 159 CLI 134, 579
B
back-end application 894 background copy 427, 434, 447, 454 background copy bandwidth 459 background copy progress 564, 587 background copy rate 395396 backup 95, 374 of data with minimal impact on production 382
955
7933IX.fm
commands 171 scripting for SVC task automation 511 cloning 95 cluster 38 creation 527 IP address 115 shutting down 471, 524, 536 time zone 522523 cluster (SVC) 892 cluster overview 38 cluster partnership 418 cluster shared disk 183 clustered ethernet port 160 clustered server resources 38 clustered system 76 clustered system configuration 86 cluster-level statistics 882 clusters 68 cold 352 collection interval 881 Colliding writes 437 colliding writes 438 Command Descriptor Block 31 command syntax 515 COMPASS architecture 57 compression 100 concepts 9 concurrent instances 232 concurrent software upgrade 605 config node 882 configurable warning capacity 29 configuration 149 configuration data 16 configuration node 33, 39, 160, 527, 892 configure AIX 162 configure SDD 224 configuring the GUI 118 connected 421422, 448 connected state 424, 449, 451 connectivity 41 consistency 450 consistency freeze 424, 432, 451 Consistency Group 382, 384, 892 consistency group 383 limits 385 consistent 422423, 449450 consistent data set 375 Consistent Stopped state 420, 447 Consistent Synchronized state 421, 447 ConsistentDisconnected 426, 453 ConsistentStopped 424, 451 ConsistentSynchronized 425, 452 container 89 contingency capacity 29 controller, renaming 471 conventional storage 227 cookie crumbs recovery 629 cooling 69 copied state 892 copy bandwidth 97, 459
copy operation 37 copy process 432, 461 copy rate 396 copy rate parameter 95 Copy Services managing 537538 COPY_COMPLETED 398 copying state 543 core-edge 880 counterpart SAN 104, 893, 897 counters 882 CPU cycle 60 CPU utilization 884 create a FlashCopy 540 create a new VDisk 684 create an SVC partnership 768 create mapping command 539540, 739, 751 create SVC partnership 557, 580 creating a VDisk 488 creating managed disk groups 645 credits 905906 current cluster state 40 cycling 595 cycling mode 595, 601 cycling period 595 cyclingmode 597 Cygwin 190
D
data backup with minimal impact on production 382 moving and migration 375 source 385 data change rates 100 data consistency 538 data corruption 450 data flow 77 data migration 69, 232 data migration and moving 375 Data Migration Planner 353 Data Migrator 353 data mining 376 data mover appliance 504 Data Placement Advisor 353 degraded mode 87 delete a FlashCopy 547 a host 484 a host port 486 a port 674, 678, 700, 704 a VDisk 500, 697 ports 485 Delete consistency group command 548 dependent writes 384, 442 destaged 42 destructive 616 detect the new MDisks 472 detected 472 Device Mapper Multipath 207 differentiator 61
956
7933IX.fm
differing storage 880 directory protocol 56 dirty bit 427, 454 disconnected 421422, 448 disconnected state 449 discovering assigned VDisk 166 discovering newly assigned MDisks 645, 653 disk access profile 498 disk controller renaming 642 systems 470 viewing details 470, 641 disk internal controllers 61 disk timeout value 218 disk zone 76 diskpart, see 183 display summary information 473 distance 411, 895 distance extenders 907 distance limitations 411 DM-MPIO 207 documentation 68, 641 dual-redundant ISLs 76 dump I/O statistics 618 I/O trace 618 listing 617 other nodes 619 durability 61 dynamic pathing 221222 dynamic shrinking 705 dynamic tracking 163
E

Easy Tier 21
Easy Tier operating modes 353
elapsed time 95
empty MDG 476
empty state 427, 454
Enterprise Storage Server (ESS) 410
entire VDisk 382
error 424, 448, 451, 474, 615
Error Code 893
error handling 397
Error ID 893
error log 614
error notification 613
ESS (Enterprise Storage Server) 410
ESS to SVC 237
eth0 60
eth1 60
Ethernet 73
Ethernet connection 74
Ethernet ports 81
event 614
event log 617
events 420, 446
Excluded 893
excludes 656
Execute Metro Mirror 563, 585
expand
  a VDisk 182, 500
  a volume 183
expand a space-efficient VDisk 500
extended distance solutions 411
extended quorum disks 909
extenders 96
extending the distance 96
Extent 893
extent 89, 228
extent level 228
extent migration plan 21
extent size 19–20
extent sizes 89
extents 20
  allocation 352

F

fabric
  remote 104
fabric interconnect 895
failover 221, 436
failover only 203
failover situation 411
fast fail 163
FAStT 410
FC optical distance 59
features, licensing 616
featurization log 617
Fibre Channel interfaces 59
Fibre Channel port fan in 104, 897
Fibre Channel Port Login 32
Fibre Channel port logins 894
Fibre Channel ports 73
file system 206
filtering 515, 636
filters 515
FlashCopy 37, 95, 374
  bitmap 386
  how it works 377, 381
  image mode disk 390
  indirection layer 385
  mapping 376
  mapping events 391
  serialization of I/O 397
  synthesis 397
FlashCopy indirection layer 385
FlashCopy mapping 382
FlashCopy mapping states 393
  Copying 393
  Idling/Copied 393
  Prepared 394
  Preparing 394
  Stopped 393
  Suspended 394
FlashCopy mappings 384
FlashCopy properties 385
FlashCopy rate 95
flexibility 102, 880
flow control 905
foreground I/O latency 459
free extents 500
Freeze time 595
frequency 881
front-end application 894
FRU 894
Full Feature Phase 32
fully allocated 27
G

gateway IP address 115
GBICs 895
general housekeeping 641
generating output 516
generator 135
geographically dispersed 410
Global Mirror 36
Global Mirror guidelines 98
Global Mirror relationship 438
Global Mirror remote copy technique 435
gminterdelaysimulation 456
gmintradelaysimulation 456
gmlinktolerance 456–457
governing 30
governing rate 30
graceful manner 529
grain 386, 894
grain size 95
grain sizes 95
grains 27, 95, 396
granularity 382
GUI 138

H

Hardware initiator 157
Hardware Management Console 43
hardware nodes 57
hardware overview 57
HBA 480, 894
HBA ports 94
heartbeat 76, 79
heartbeat signal 41
heartbeat traffic 97
heatmap 352
help 641
high availability 38, 68
high-bandwidth link 595
home directory 171
host
  and application server guidelines 93
  configuration 149
  creating 480
  deleting 670
  information 661
  showing 508
  systems 75
host adapter configuration settings 173
host bus adapter 480, 906
Host failover 161
Host ID 894
host mapping 22
host object 155
Host Type 664, 667
hot 352
HP-UX support information 221–222

I

I/O budget 30
I/O governing 30, 498
I/O governing rate 498
I/O Group 895
I/O group 895
  renaming 531
  viewing details 531
I/O load 880
I/O Monitoring 353
I/O pair 70
I/O per secs 68
I/O statistics dump 618
I/O trace dump 618
ICAT 42–43
identical data 410, 446
idling 425, 452
idling state 432, 461
IdlingDisconnected 425, 452
image 20
Image Mode 895
image mode 235, 726
image mode disk 390
image mode MDisk 235
Image Mode Migration 38
image mode to image mode 268
image mode VDisk 230
image mode virtual disks 92
inappropriate zoning 84
inconsistent 422, 449
Inconsistent Copying state 421, 447
Inconsistent Stopped state 420, 447
InconsistentCopying 424, 451
InconsistentDisconnected 426, 453
InconsistentStopped 424, 450
incremental 37
independent power supply 903
indirection layer 374, 385
indirection layer algorithm 387
informational error logs 398
initiator 156
initiator name 31
initiator port 664, 667
input power 524
install 67
insufficient bandwidth 396
integrity 383–384
interaction with the cache 389
intercluster link 418
intercluster link bandwidth 459
intercluster link maintenance 418–419, 444
intercluster Metro Mirror 411, 435
intercluster zoning 418–419, 444
interfaces 884
internal counters 882
Internet Storage Name Service 34, 159, 895
interswitch link (ISL) 896
interval 524
intracluster Metro Mirror 410, 435
IP address
  modifying 521, 841
IP addresses 69, 841, 843, 847
IP subnet 74
ipconfig 144
IPv4 143
IPv6 143
IPv6 addresses 144
IQN 22, 31, 81, 157, 894
IQNs 31
iSCSI 30, 60, 68, 156
iSCSI Address 31
iSCSI client 156
iSCSI HBA 157
iSCSI IP address failover 160
iSCSI Multipathing 34
iSCSI Name 31
iSCSI name 157
iSCSI node 31
iSCSI nodes 157
iSCSI Qualified Name 31, 157, 894
iSCSI qualified name 81
iSCSI Send Target 34
iSCSI session 32
iSCSI Simple Name Server 81
iSCSI target node failover 160
iSCSI traffic 156
iSCSI volume discovery 33
ISL (interswitch link) 896
ISL hop count 411, 435
ISL load 880
ISL Trunking 880
iSNS 34, 81, 159, 895
issue CLI commands 190
J

jumbo frames 34

K

kernel level 200
key 159
key files on AIX 171

L

LAN Interfaces 59
LAN segment 86
last extent 237
latency 97
latency restrictions 6
layer 95
layers 95
LBA 187, 427, 454
LDAP 6, 45, 55
lease expiry 76
license 115
licensing feature 616
licensing feature settings 616
Lightweight Directory Access Protocol 55
limiting factor 102
linear 881
link errors 59
Linux 171
Linux kernel 39
Linux on Intel 199
list dump 617
listing dumps 617
Load balancing 203
Local authentication 44
local cluster 429, 455
Local fabric 895
local fabric interconnect 895
Local users 54
log 881
logged 614
Logical Block Address 427, 454
logical block address 187
logical configuration data 621
logical disks 20
Login Phase 32
logins 664, 667
lower tier 352
lower-bandwidth 6
lower-bandwidth remote mirroring 6
lsrcrelationshipcandidate 460
LU 895
LUN limitations 172
LUN masked 22
LUN masking 34
LUNs 895

M

magnetic disks 61
maintenance levels 173
maintenance tasks 605
Managed 895
Managed disk 895
managed disk 895
  working with 641
managed disk group 476
  creating 645
  viewing 647
Managed Disks 895
managed mode MDisk 235
managed mode to image mode 263
managed mode virtual disk 92
management 102, 880
map a VDisk to a host 501
mapping 381
mapping events 391
Master 895
master 446
master console 69
master VDisk 447, 454
masterchange 597
MC 895
MDG 895
MDG level 476
MDGs 69
MDisk 69, 895
  adding 476, 650, 654
  discovering 471, 653
  including 474, 656
  information 650
  modes 235
  name parameter 473
  removing 480, 650, 655
  renaming 474, 652
  showing 507
  showing in group 476
MDisk group
  creating 479, 645
  deleting 479, 649
  renaming 479, 648
  showing 476, 507
  viewing information 478
MDiskgrp 895
MDisks 884
Metro Mirror 36, 410
Metro Mirror consistency group 430–431, 433–434, 460–463
Metro Mirror features 412, 436
Metro Mirror process 445
Metro Mirror relationship 430, 432, 434, 438, 460–461, 463
microcode 41
Microsoft Active Directory 55
Microsoft Cluster 183
Microsoft Multi Path Input Output 173
Microsoft Volume Shadow Copy Service 191
migrate 227
migrate a VDisk 230
migrate between MDGs 230
migrate data 235
migrate VDisks 503
migrating multiple extents 228
migration
  algorithm 233
  functional overview 232
  operations 228
  overview 228
  tips 237
migration activities 228
migration plan 352
migration process 504
migration progress 232
migration report 22
migration threads 228
mirrored 436
mirrored copy 435
mirrored volume 26
mkpartnership 459
mkrcconsistgrp 460
mkrcrelationship 460
MLC 60
modify a host 483
modifying a VDisk 497
mount 206
mount point 206
moving and migrating data 375
MPIO 93, 161, 173
MSCS 183
MTU 34
MTU sizes 34
multi layer cell 60
multipath I/O 93
multipath storage solution 174
multipathing device driver 93
multipathing driver 161
Multipathing drivers 34
multiple disk arrays 102, 880
multiple extents 228
multiple paths 34
multiple virtual machines 212
multiprotocol routers 96
N
network bandwidth 100
Network Entity 157
network interface cards 156
Network Portals 157
Network Time Protocol 853
new mapping 501
NICs 156
Node 896
node 39, 526
  adding 527
  deleting 528
  failure 397
  port 894
  renaming 528
  shutting down 529
  viewing details 526
node details 526
node dumps 619
node level 526
Node Unique ID 39
nodes 68
non-preferred path 221
non-redundant 893
non-zero contingency 29
N-port 896
NTP 853
O
offline rules 230
offload features 33
older disk systems 103
on screen content 515, 636
online help 641
on-screen content 515
OpenSSH 171
OpenSSH client 190
operating modes 353
operating system versions 173
ordering 384
organizing on-screen content 515
other node dumps 619
overall performance needs 68
overloaded 76
Oversubscription 896
oversubscription 880, 896
overwritten 381, 611
P
package numbering and version 605, 856
parallelism 232
partial last extent 236
partner node 160
partnership 16, 456
passphrase 135
path failover 221
path failure 398
path offline 398
path offline for source VDisk 398
path offline for target VDisk 398
path offline state 398
path-selection policy algorithms 203
peak 459
peak workload 97
pended 30
per cluster 232
per managed disk 233
per node statistics 882
performance 91, 880
performance advantage 102, 880
performance considerations 880–881
performance function 21
performance improvement 102, 880
performance monitoring tool 98
performance requirements 68
performance scalability 38
performance statistics 98, 881
physical location 69
physical planning 69
physical rules 70
physical site 69
Physical Volume Links 222
PiT consistent data 374
PiT copy 386
planning rules 68
plink 512
PLOGI 32
Point-in-Time 37
point-in-time copy 423, 450
policing 30
policy decision 428, 454
port
  adding 484, 671
  deleting 485, 674
port binding 224
Port Mask 664, 667
port mask 94, 155
port masking 155
port speeds 78
PortChanneling 880
Power Systems 171
PPRC
  background copy 427, 434, 454
  commands 428, 455
  configuration limits 455
  detailed states 423, 450
preferred access node 92
preferred node 25, 902
preferred path 221
pre-installation planning 68
Prepare 896
prepare (pre-trigger) FlashCopy mapping command 541
PREPARE_COMPLETED 398
preparing volumes 170
pre-trigger 541
primary 436, 569, 592
primary clustered system 99
primary copy 27
priority 504
priority setting 504
private key 133, 135, 171
production VDisk 454
provisioning 459
public key 133, 135, 171, 512
PuTTY 43, 134, 137, 525
  CLI session 141
  default location 135
  security alert 142
putty 6
PuTTY application 141, 529
PuTTY Installation 190
PuTTY Key Generator 135–136
PuTTY Key Generator GUI 134
PuTTY Secure Copy 608
PuTTY session 142
PuTTY SSH client software 190
PVLinks 222
Q
QLogic HBAs 200
Quality Of Service 30
Queue Full Condition 30
quiesce 525
quorum candidates 40
Quorum Disk 39
quorum disk 18, 39, 902–903, 926
quorum disk candidate 40
quorum disk placement 902
quorum disks 87
quorum index 926
quorum status 925
quorum support 909
R
RAID 896
RAID controller 75–76
RAID mode 880
RAID size 880
RAMAC 61
RAS 896
real capacity 28–29
real-time performance monitoring 884
real-time synchronized 410
reassign the VDisk 503
recall commands 468, 515
Recovery Point Objective 36
recovery point objective 6
recovery procedures 36
Redbooks website
  Contact us xxviii
redundancy 60, 97
redundant 893
Redundant SAN 897
redundant SAN 897
relationship 382, 445
relationship state diagram 420, 446
reliability 91
Reliability, Availability, and Serviceability (RAS) 896
remote 897
Remote authentication 44
remote authentication 45
remote fabric 104, 895
  interconnect 895
Remote users 55
remove a disk 187
remove an MDG 479
remove WWPN definitions 485
rename a disk controller 642
rename an MDG 648, 756, 778
rename an MDisk 652, 668, 691, 755, 778
repartitioning 91
replication 95
restart the cluster 525
restart the node 530
restarting 567, 591
restore points 377
Reverse FlashCopy 37, 377
RFC3720 31
rmrcconsistgrp 463
rmrcrelationship 463
rollback 37
round robin 92, 204, 221
round trip delay 907
round trip delay time 903
round-robin 20
round-robin fashion 25
RPO 6, 36, 595
S
sampling interval 881
SAN Boot Support 221, 223
SAN definitions 104
SAN fabric 75
SAN planning 73
SAN Volume Controller 897
  documentation 641
  general housekeeping 641
  help 641
  virtualization 42
SAN zoning 133
SATA 99
scalable 103, 881
scalable cluster architecture 881
SCM 61
scripting 428, 454, 511
scripts 183, 511, 891
SCSI 897
SCSI commands 156
SCSI Disk 895
SCSI primitives 471
SDD 92–93, 162, 165, 169, 223
SDD (Subsystem Device Driver) 169, 201, 223, 240
SDD Dynamic Pathing 221
SDD installation 166
SDD package version 173
SDDDSM 172, 174
secondary 436
secondary clustered system 99
secondary site 68
secure session 529
Secure Shell (SSH) 133
Secure Shell connection 42
separate physical IP networks 60
sequential 20, 92, 488
serialization 397
serialization of I/O by FlashCopy 397
Service Location Protocol 34, 159, 897
set up Metro Mirror 555, 579
SEV 498
shells 511
short-term status information 884
shrink a VDisk 705
shrinking 705
shrinkvdisksize 505
shut down 183
shut down a single node 529
shut down the cluster 524, 795
Simple Network Management Protocol 37, 428, 454, 474
single layer cell 60
single point in time 37
single point of failure 897
single sign-on 43, 56
single-tiered storage pool 20
site 69, 410
SLC 60
SLP 34, 159, 897
SLP daemon 34
SNIA 2
SNMP 37, 428, 454, 474
SNMP alerts 656
SNMP manager 613
SNMP trap 398
Software initiator 157
software upgrade 605
software upgrade packages 856
Solid State Drive 39
Solid State Drives 58
solution guidelines 102
sort 639
sorting 639
source 396
space-efficient 6, 491
Space-efficient background copy 445
space-efficient VDisk 505
space-efficient volume 505
special migration 237
Split 87
split brain 18, 39, 901
split cluster 880
split I/O 6
split I/O Group 87
split per second 95
split-cluster 87
splitting the SAN 897
SPoF 897
spreading the load 91
SSD market 61
SSD solution 61
SSH 42, 512
SSH (Secure Shell) 133
SSH Client 43
SSH client 171, 190
SSH client software 133
SSH key 53
SSH keys 133, 137
SSH server 133
SSH-2 134
SSO 56
stack 234
stand-alone Metro Mirror relationship 561, 585
start (trigger) FlashCopy mapping command 543–544, 760, 783
start a PPRC relationship command 432, 461
startrcrelationship 461
STAT 352
state 423–424, 450–451
  connected 421, 448
  consistent 422–423, 449–450
  ConsistentDisconnected 426, 453
  ConsistentStopped 424, 451
  ConsistentSynchronized 425, 452
  disconnected 421, 448
  empty 427, 454
  idling 425, 452
  IdlingDisconnected 425, 452
  inconsistent 422, 449
  InconsistentCopying 424, 451
  InconsistentDisconnected 426, 453
  InconsistentStopped 424, 450
  overview 420, 448
  synchronized 423, 450
state fragments 422, 449
state overview 421, 455
state transitions 398, 448
states 396, 420, 446
statistics 524
statistics dump 618
Statistics file naming 882
statistics files 881
stop 448
stop FlashCopy consistency group 546, 763
stop FlashCopy mapping command 545
STOP_COMPLETED 398
stoprcconsistgrp 462
stoprcrelationship 461
storage 95
storage cache 41
storage capacity 68
Storage Class Memory 61
storage pool 18
storage tier 21
stripe VDisks 102, 880
striped 20
striped mode 20
striped VDisk 488
subnet mask IP address 115
Subsystem Device Driver (SDD) 169, 201, 223, 240
Subsystem Device Driver Device Specific Module 172
Subsystem Device Driver DSM 174
summary report 352
SUN Solaris support information 221
surviving node 529
suspended mapping 545
SVC
  basic installation 111
  task automation 511
SVC cluster partnership 429, 456
SVC cluster software 857
SVC configuration 68
SVC Console 43
SVC device 898
SVC GUI 43
SVC installations 86
SVC master console 134
SVC node 87
SVC PPRC functions 412
SVC setup 150
SVC superuser 53
svcinfo 468, 626
svcinfo lsfreeextents 232
svcinfo lsmdiskextent 232
svcinfo lsmigrate 232
svcinfo lsVDiskextent 232
svctask 468, 515, 626
svctask mkfcmap 429–432, 456, 459–461, 539–540, 745
switching copy direction 569, 592
switchrcconsistgrp 464
switchrcrelationship 463
symmetrical 2
symmetrical network 896
symmetrical virtualization 2
synchronized 410, 423, 446, 450
synchronizing 445
synchronous reads 234
synchronous writes 234
synthesis 397
system management IP address 33
System Storage Productivity Center 897
T
T0 37
target 156
Target failover 160
target name 31
TCP/IP packets 156
thin-provisioned 27
threshold level 30
tie breaker 39
tie-break situations 39
tie-breaker 39
tier 18, 22
time 522
time zone 522–523
timeout 218
Time-Zero 37
TIP 45
Tivoli Directory Server 55
Tivoli Embedded Security Services 45, 56
Tivoli Integrated Portal 43, 45
Tivoli Storage Productivity Center 43
Tivoli Storage Productivity Center for Data 43
Tivoli Storage Productivity Center for Disk 43
Tivoli Storage Productivity Center for Replication 43
Tivoli Storage Productivity Center Standard Edition 43
bottlenecks 880
token facility 56
trace dump 618
traffic 97
traffic profile activity 68
transitions 235
trigger 543–544
U
unallocated capacity 186
unallocated region 445
unconfigured nodes 527
undetected data corruption 450
uninterruptible power supply 73, 87, 524, 606
unmanaged MDisk 235
unmap a VDisk 503
up2date 200
updates 200
upgrade 856
upgrade precautions 605
upper tier 352
usage statistics 22
use of Metro Mirror 427, 454
used capacity 29
used free capacity 29
using SDD 169, 201, 223
V

VDisk 656
  assigning to host 501
  creating 487, 491, 684
  creating in image mode 491, 726
  deleting 500, 697
  discovering assigned 166
  expanding 500
  I/O governing 497
  image mode migration concept 235
  information 489
  mapped to this host 503
  migrating 93, 503, 713
  modifying 497
  path offline for source 398
  path offline for target 398
  showing 650
  showing for MDisk 506
  showing using group 506
  shrinking 504, 726
  working with 487
VDisk discovery 159
VDisk-to-host mapping 503
  deleting 700
Veritas Volume Manager 221
View I/O Group details 531
viewing managed disk groups 647
virtual capacity 28
virtual disk 382, 487, 627, 679
Virtual Machine File System 212
virtualization 42
VLUN 895
VMFS 212–214
VMFS datastore 216
Volume I/O governing 30
Volume Mirroring 89
Volume Mirroring Migration 38
volumes 20
  target 385
Voting Set 39
voting set 39
vpath configured 168
VSS 191

W

warning capacity 29
warning threshold 505
web interface 224
Windows 2000 host configuration 172, 211
Windows 2000-based hosts 171
Windows host system CLI 190
Windows NT and 2000 specific information 171
working with managed disks 641
workload cycle 98
workloads 880
worldwide port name 164
Write data 42
Write ordering 450
write ordering 416, 442, 449
write performance 27
write through mode 87
write workload 98
write-through 904
write-through mode 42
WWPNs 164, 480, 485, 664, 673
X
XIV 86
xt 860
Y
YaST Online Update 200
Z
zero buffer 445
zero contingency 29
zero-detection algorithm 29
zone 75
zoning capabilities 76
zoning recommendation 179
Back cover